Mar 12 01:38:06.069226 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:38:06.069254 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:38:06.069272 kernel: BIOS-provided physical RAM map:
Mar 12 01:38:06.069280 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 12 01:38:06.069290 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 12 01:38:06.069332 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 12 01:38:06.069344 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 12 01:38:06.069353 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 12 01:38:06.069362 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 12 01:38:06.069372 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 12 01:38:06.069385 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 12 01:38:06.069394 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 12 01:38:06.069403 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 12 01:38:06.069413 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 12 01:38:06.069424 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 12 01:38:06.069434 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 12 01:38:06.069447 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 12 01:38:06.069457 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 12 01:38:06.069466 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 12 01:38:06.069476 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:38:06.069486 kernel: NX (Execute Disable) protection: active
Mar 12 01:38:06.069496 kernel: APIC: Static calls initialized
Mar 12 01:38:06.069505 kernel: efi: EFI v2.7 by EDK II
Mar 12 01:38:06.069515 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 12 01:38:06.069525 kernel: SMBIOS 2.8 present.
Mar 12 01:38:06.069535 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 12 01:38:06.069544 kernel: Hypervisor detected: KVM
Mar 12 01:38:06.069557 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:38:06.069567 kernel: kvm-clock: using sched offset of 6157435244 cycles
Mar 12 01:38:06.069577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:38:06.069587 kernel: tsc: Detected 2445.424 MHz processor
Mar 12 01:38:06.069597 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:38:06.069608 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:38:06.069618 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 12 01:38:06.069628 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 12 01:38:06.069638 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:38:06.069695 kernel: Using GB pages for direct mapping
Mar 12 01:38:06.069705 kernel: Secure boot disabled
Mar 12 01:38:06.069715 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:38:06.069726 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 12 01:38:06.069741 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 12 01:38:06.069752 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069762 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069776 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 12 01:38:06.069787 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069798 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069809 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069819 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:38:06.069830 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 12 01:38:06.069840 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 12 01:38:06.069854 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 12 01:38:06.069865 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 12 01:38:06.069875 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 12 01:38:06.069886 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 12 01:38:06.069896 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 12 01:38:06.069907 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 12 01:38:06.069917 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 12 01:38:06.069928 kernel: No NUMA configuration found
Mar 12 01:38:06.069938 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 12 01:38:06.069952 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 12 01:38:06.069962 kernel: Zone ranges:
Mar 12 01:38:06.069973 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:38:06.069983 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 12 01:38:06.069994 kernel: Normal empty
Mar 12 01:38:06.070004 kernel: Movable zone start for each node
Mar 12 01:38:06.070015 kernel: Early memory node ranges
Mar 12 01:38:06.070025 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 12 01:38:06.070035 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 12 01:38:06.070046 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 12 01:38:06.070059 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 12 01:38:06.070070 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 12 01:38:06.070080 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 12 01:38:06.070091 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 12 01:38:06.070102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:38:06.070112 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 12 01:38:06.070123 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 12 01:38:06.070133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:38:06.070144 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 12 01:38:06.070157 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 12 01:38:06.070168 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 12 01:38:06.070179 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:38:06.070189 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:38:06.070200 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:38:06.070210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:38:06.070221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:38:06.070231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:38:06.070242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:38:06.070253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:38:06.070269 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:38:06.070279 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:38:06.070290 kernel: TSC deadline timer available
Mar 12 01:38:06.070335 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:38:06.070347 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:38:06.070359 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:38:06.070370 kernel: kvm-guest: setup PV sched yield
Mar 12 01:38:06.070382 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 12 01:38:06.070392 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:38:06.070409 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:38:06.070419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:38:06.070430 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:38:06.070442 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:38:06.070452 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:38:06.070464 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:38:06.070474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:38:06.070485 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:38:06.070499 kernel: random: crng init done
Mar 12 01:38:06.070509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:38:06.070520 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:38:06.070529 kernel: Fallback order for Node 0: 0
Mar 12 01:38:06.070539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 12 01:38:06.070550 kernel: Policy zone: DMA32
Mar 12 01:38:06.070561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:38:06.070568 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 12 01:38:06.070574 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:38:06.070583 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:38:06.070589 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:38:06.070596 kernel: Dynamic Preempt: voluntary
Mar 12 01:38:06.070602 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:38:06.070620 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:38:06.070629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:38:06.070636 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:38:06.070683 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:38:06.070690 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:38:06.070697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 01:38:06.070703 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:38:06.070710 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:38:06.070719 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:38:06.070726 kernel: Console: colour dummy device 80x25
Mar 12 01:38:06.070732 kernel: printk: console [ttyS0] enabled
Mar 12 01:38:06.070739 kernel: ACPI: Core revision 20230628
Mar 12 01:38:06.070745 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:38:06.070754 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:38:06.070761 kernel: x2apic enabled
Mar 12 01:38:06.070768 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:38:06.070774 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:38:06.070781 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:38:06.070787 kernel: kvm-guest: setup PV IPIs
Mar 12 01:38:06.070794 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:38:06.070800 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:38:06.070807 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 12 01:38:06.070816 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:38:06.070822 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:38:06.070828 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:38:06.070835 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:38:06.070841 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:38:06.070848 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:38:06.070854 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:38:06.070861 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:38:06.070868 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:38:06.070877 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:38:06.070883 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:38:06.070889 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:38:06.070896 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:38:06.070902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:38:06.070909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:38:06.070915 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:38:06.070922 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:38:06.070930 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 12 01:38:06.070937 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:38:06.070943 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:38:06.070950 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:38:06.070956 kernel: landlock: Up and running.
Mar 12 01:38:06.070962 kernel: SELinux: Initializing.
Mar 12 01:38:06.070969 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:38:06.070976 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:38:06.070982 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:38:06.070991 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:38:06.070997 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:38:06.071004 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:38:06.071011 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:38:06.071017 kernel: signal: max sigframe size: 1776
Mar 12 01:38:06.071023 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:38:06.071030 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:38:06.071036 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:38:06.071043 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:38:06.071052 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:38:06.071058 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:38:06.071064 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:38:06.071071 kernel: smpboot: Max logical packages: 1
Mar 12 01:38:06.071077 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 12 01:38:06.071084 kernel: devtmpfs: initialized
Mar 12 01:38:06.071090 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:38:06.071097 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 12 01:38:06.071103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 12 01:38:06.071112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 12 01:38:06.071118 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 12 01:38:06.071125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 12 01:38:06.071132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:38:06.071138 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:38:06.071144 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:38:06.071151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:38:06.071157 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:38:06.071164 kernel: audit: type=2000 audit(1773279484.296:1): state=initialized audit_enabled=0 res=1
Mar 12 01:38:06.071172 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:38:06.071179 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:38:06.071185 kernel: cpuidle: using governor menu
Mar 12 01:38:06.071192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:38:06.071198 kernel: dca service started, version 1.12.1
Mar 12 01:38:06.071204 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:38:06.071211 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:38:06.071218 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:38:06.071224 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 01:38:06.071233 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:38:06.071239 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:38:06.071246 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:38:06.071252 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:38:06.071259 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:38:06.071265 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:38:06.071271 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:38:06.071278 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:38:06.071284 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:38:06.071293 kernel: ACPI: Interpreter enabled
Mar 12 01:38:06.071323 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:38:06.071330 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:38:06.071336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:38:06.071343 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:38:06.071349 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:38:06.071356 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:38:06.071539 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:38:06.071769 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:38:06.071953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:38:06.071964 kernel: PCI host bridge to bus 0000:00
Mar 12 01:38:06.072091 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:38:06.072205 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:38:06.072358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:38:06.072474 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:38:06.072591 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:38:06.072820 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 12 01:38:06.072936 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:38:06.073073 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:38:06.073202 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:38:06.073357 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 12 01:38:06.073487 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 12 01:38:06.073606 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 12 01:38:06.073777 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 12 01:38:06.073900 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:38:06.074030 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:38:06.074151 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 12 01:38:06.074272 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 12 01:38:06.074431 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 12 01:38:06.074564 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:38:06.074736 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 12 01:38:06.074860 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 12 01:38:06.074981 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 12 01:38:06.075157 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:38:06.075426 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 12 01:38:06.075555 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 12 01:38:06.075731 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 12 01:38:06.075855 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 12 01:38:06.075981 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:38:06.076102 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:38:06.076228 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:38:06.076418 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 12 01:38:06.076539 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 12 01:38:06.076716 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:38:06.076841 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 12 01:38:06.076850 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:38:06.076863 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:38:06.076875 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:38:06.076887 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:38:06.076902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:38:06.076915 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:38:06.076927 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:38:06.076938 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:38:06.076949 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:38:06.076961 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:38:06.076972 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:38:06.076986 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:38:06.076997 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:38:06.077012 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:38:06.077023 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:38:06.077034 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:38:06.077045 kernel: iommu: Default domain type: Translated
Mar 12 01:38:06.077056 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:38:06.077063 kernel: efivars: Registered efivars operations
Mar 12 01:38:06.077069 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:38:06.077076 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:38:06.077082 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 12 01:38:06.077092 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 12 01:38:06.077098 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 12 01:38:06.077105 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 12 01:38:06.077241 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:38:06.077451 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:38:06.077615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:38:06.077626 kernel: vgaarb: loaded
Mar 12 01:38:06.077633 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:38:06.077722 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:38:06.077750 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:38:06.077761 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:38:06.077774 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:38:06.077786 kernel: pnp: PnP ACPI init
Mar 12 01:38:06.077939 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:38:06.077951 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:38:06.077958 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:38:06.077965 kernel: NET: Registered PF_INET protocol family
Mar 12 01:38:06.077975 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:38:06.077982 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:38:06.077988 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:38:06.077995 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:38:06.078002 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:38:06.078008 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:38:06.078015 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:38:06.078021 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:38:06.078028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:38:06.078037 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:38:06.078161 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 12 01:38:06.078344 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 12 01:38:06.078502 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:38:06.078619 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:38:06.078787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:38:06.078900 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:38:06.079015 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:38:06.079126 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 12 01:38:06.079135 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:38:06.079142 kernel: Initialise system trusted keyrings
Mar 12 01:38:06.079148 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:38:06.079155 kernel: Key type asymmetric registered
Mar 12 01:38:06.079161 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:38:06.079168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:38:06.079175 kernel: io scheduler mq-deadline registered
Mar 12 01:38:06.079185 kernel: io scheduler kyber registered
Mar 12 01:38:06.079191 kernel: io scheduler bfq registered
Mar 12 01:38:06.079198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:38:06.079205 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:38:06.079211 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:38:06.079218 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:38:06.079225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:38:06.079231 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:38:06.079238 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:38:06.079247 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:38:06.079259 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:38:06.079272 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:38:06.079489 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:38:06.079611 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:38:06.079874 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:38:05 UTC (1773279485)
Mar 12 01:38:06.080018 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:38:06.080029 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:38:06.080058 kernel: efifb: probing for efifb
Mar 12 01:38:06.080066 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 12 01:38:06.080087 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 12 01:38:06.080093 kernel: efifb: scrolling: redraw
Mar 12 01:38:06.080100 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 12 01:38:06.080120 kernel: Console: switching to colour frame buffer device 100x37
Mar 12 01:38:06.080150 kernel: fb0: EFI VGA frame buffer device
Mar 12 01:38:06.080171 kernel: pstore: Using crash dump compression: deflate
Mar 12 01:38:06.080191 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 12 01:38:06.080214 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:38:06.080221 kernel: Segment Routing with IPv6
Mar 12 01:38:06.080228 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:38:06.080248 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:38:06.080269 kernel: Key type dns_resolver registered
Mar 12 01:38:06.080289 kernel: IPI shorthand broadcast: enabled
Mar 12 01:38:06.080359 kernel: sched_clock: Marking stable (922016165, 366191313)->(1628977459, -340769981)
Mar 12 01:38:06.080383 kernel: registered taskstats version 1
Mar 12 01:38:06.080390 kernel: Loading compiled-in X.509 certificates
Mar 12 01:38:06.080397 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:38:06.080420 kernel: Key type .fscrypt registered
Mar 12 01:38:06.080440 kernel: Key type fscrypt-provisioning registered
Mar 12 01:38:06.080460 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:38:06.080480 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:38:06.080499 kernel: ima: No architecture policies found
Mar 12 01:38:06.080506 kernel: clk: Disabling unused clocks
Mar 12 01:38:06.080513 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:38:06.080524 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:38:06.080542 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:38:06.080552 kernel: Run /init as init process
Mar 12 01:38:06.080566 kernel: with arguments:
Mar 12 01:38:06.080578 kernel: /init
Mar 12 01:38:06.080590 kernel: with environment:
Mar 12 01:38:06.080601 kernel: HOME=/
Mar 12 01:38:06.080613 kernel: TERM=linux
Mar 12 01:38:06.080630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:38:06.080696 systemd[1]: Detected virtualization kvm.
Mar 12 01:38:06.080704 systemd[1]: Detected architecture x86-64.
Mar 12 01:38:06.080711 systemd[1]: Running in initrd.
Mar 12 01:38:06.080718 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:38:06.080725 systemd[1]: Hostname set to .
Mar 12 01:38:06.080732 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:38:06.080740 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:38:06.080747 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:38:06.080757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:38:06.080765 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:38:06.080773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:38:06.080781 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:38:06.080791 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:38:06.080802 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:38:06.080810 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:38:06.080817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:38:06.080824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:38:06.080831 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:38:06.080838 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:38:06.080862 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:38:06.080875 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:38:06.080883 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:38:06.080890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:38:06.080897 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:38:06.080904 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:38:06.080911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:38:06.080919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:38:06.080926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:38:06.080936 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:38:06.080943 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:38:06.080950 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:38:06.080957 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:38:06.080964 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:38:06.080971 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:38:06.080979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:38:06.080986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:38:06.080993 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:38:06.081024 systemd-journald[193]: Collecting audit messages is disabled.
Mar 12 01:38:06.081040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:38:06.081048 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:38:06.081059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:38:06.081066 systemd-journald[193]: Journal started
Mar 12 01:38:06.081081 systemd-journald[193]: Runtime Journal (/run/log/journal/297d69531d8340cda042272f5727ac8b) is 6.0M, max 48.3M, 42.2M free.
Mar 12 01:38:06.083366 systemd-modules-load[194]: Inserted module 'overlay'
Mar 12 01:38:06.094695 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:38:06.110057 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:38:06.111870 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:38:06.118818 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:38:06.134003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:38:06.140901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:38:06.142911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:38:06.143219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:38:06.167961 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 12 01:38:06.169705 kernel: Bridge firewalling registered
Mar 12 01:38:06.169827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:38:06.171414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:38:06.175899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:38:06.202992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:38:06.223001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:38:06.230194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:38:06.239336 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:38:06.259139 dracut-cmdline[231]: dracut-dracut-053
Mar 12 01:38:06.263517 systemd-resolved[227]: Positive Trust Anchors:
Mar 12 01:38:06.263534 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:38:06.281611 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:38:06.263561 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:38:06.265978 systemd-resolved[227]: Defaulting to hostname 'linux'.
Mar 12 01:38:06.267218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:38:06.272150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:38:06.358769 kernel: SCSI subsystem initialized
Mar 12 01:38:06.368776 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:38:06.382758 kernel: iscsi: registered transport (tcp)
Mar 12 01:38:06.407943 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:38:06.408027 kernel: QLogic iSCSI HBA Driver
Mar 12 01:38:06.462505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:38:06.473896 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:38:06.507035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:38:06.507137 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:38:06.510550 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 01:38:06.558743 kernel: raid6: avx2x4 gen() 29188 MB/s
Mar 12 01:38:06.576755 kernel: raid6: avx2x2 gen() 26750 MB/s
Mar 12 01:38:06.595904 kernel: raid6: avx2x1 gen() 21774 MB/s
Mar 12 01:38:06.595987 kernel: raid6: using algorithm avx2x4 gen() 29188 MB/s
Mar 12 01:38:06.616016 kernel: raid6: .... xor() 4858 MB/s, rmw enabled
Mar 12 01:38:06.616069 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:38:06.636718 kernel: xor: automatically using best checksumming function avx
Mar 12 01:38:06.779711 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:38:06.793939 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:38:06.815856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:38:06.828138 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 12 01:38:06.832774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:38:06.839852 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:38:06.858445 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Mar 12 01:38:06.893582 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:38:06.909807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:38:06.984007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:38:06.995882 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:38:07.010210 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:38:07.016479 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:38:07.020609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:38:07.025506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:38:07.044691 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:38:07.043560 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 01:38:07.055277 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 12 01:38:07.064825 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:38:07.064841 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:38:07.064851 kernel: GPT:9289727 != 19775487
Mar 12 01:38:07.064860 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:38:07.064869 kernel: GPT:9289727 != 19775487
Mar 12 01:38:07.064878 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:38:07.064887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:38:07.063972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:38:07.064032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:38:07.081011 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:38:07.081133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:38:07.081233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:38:07.092889 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:38:07.106040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:38:07.113772 kernel: libata version 3.00 loaded.
Mar 12 01:38:07.114117 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:38:07.123630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:38:07.133431 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:38:07.133627 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:38:07.133683 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 01:38:07.133841 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:38:07.125941 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:38:07.140745 kernel: scsi host0: ahci
Mar 12 01:38:07.143701 kernel: scsi host1: ahci
Mar 12 01:38:07.143908 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 12 01:38:07.145732 kernel: scsi host2: ahci
Mar 12 01:38:07.149684 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:38:07.149706 kernel: scsi host3: ahci
Mar 12 01:38:07.151287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:38:07.180618 kernel: scsi host4: ahci
Mar 12 01:38:07.180929 kernel: scsi host5: ahci
Mar 12 01:38:07.181377 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 12 01:38:07.181405 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (473)
Mar 12 01:38:07.181423 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 12 01:38:07.181442 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 12 01:38:07.181458 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 12 01:38:07.181475 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 12 01:38:07.181499 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 12 01:38:07.181517 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 12 01:38:07.195008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:38:07.202351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:38:07.213819 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:38:07.217811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:38:07.229581 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 01:38:07.233265 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:38:07.258887 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:38:07.262927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:38:07.275075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:38:07.275096 disk-uuid[559]: Primary Header is updated.
Mar 12 01:38:07.275096 disk-uuid[559]: Secondary Entries is updated.
Mar 12 01:38:07.275096 disk-uuid[559]: Secondary Header is updated.
Mar 12 01:38:07.283271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:38:07.287095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:38:07.501162 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:38:07.501233 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:38:07.501244 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:38:07.501697 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:38:07.505728 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:38:07.507765 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:38:07.510751 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:38:07.514901 kernel: ata3.00: applying bridge limits
Mar 12 01:38:07.515099 kernel: ata3.00: configured for UDMA/100
Mar 12 01:38:07.521715 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 12 01:38:07.576100 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:38:07.576406 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:38:07.588754 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:38:08.280730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:38:08.281066 disk-uuid[561]: The operation has completed successfully.
Mar 12 01:38:08.321918 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 01:38:08.322111 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 01:38:08.358877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 01:38:08.369008 sh[596]: Success
Mar 12 01:38:08.386690 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 12 01:38:08.437548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:38:08.461572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 01:38:08.467205 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 01:38:08.490497 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 01:38:08.490552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:38:08.490564 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 01:38:08.497976 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 01:38:08.498008 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 01:38:08.510853 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 01:38:08.511697 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:38:08.523838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 01:38:08.530951 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:38:08.549506 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:38:08.549540 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:38:08.549558 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:38:08.549576 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:38:08.563176 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 01:38:08.568361 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:38:08.579271 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 01:38:08.590872 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 01:38:08.665806 ignition[666]: Ignition 2.19.0
Mar 12 01:38:08.665848 ignition[666]: Stage: fetch-offline
Mar 12 01:38:08.665905 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:38:08.665923 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:38:08.666046 ignition[666]: parsed url from cmdline: ""
Mar 12 01:38:08.666052 ignition[666]: no config URL provided
Mar 12 01:38:08.666062 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 01:38:08.666076 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Mar 12 01:38:08.666115 ignition[666]: op(1): [started] loading QEMU firmware config module
Mar 12 01:38:08.666125 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 12 01:38:08.676596 ignition[666]: op(1): [finished] loading QEMU firmware config module
Mar 12 01:38:08.719028 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:38:08.734957 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:38:08.773212 systemd-networkd[785]: lo: Link UP
Mar 12 01:38:08.773249 systemd-networkd[785]: lo: Gained carrier
Mar 12 01:38:08.775254 systemd-networkd[785]: Enumeration completed
Mar 12 01:38:08.776281 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:38:08.776285 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:38:08.777577 systemd-networkd[785]: eth0: Link UP
Mar 12 01:38:08.777582 systemd-networkd[785]: eth0: Gained carrier
Mar 12 01:38:08.777589 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:38:08.777809 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:38:08.779456 systemd[1]: Reached target network.target - Network.
Mar 12 01:38:08.833761 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:38:08.929361 ignition[666]: parsing config with SHA512: c0731e8a36904275e525957bc000f336e9bcbbd872adf94e036dac4eb01887014366c9c2ed580e15dce1f41dd91610191fc24e74b319ac2a8a4297c2af9098b8
Mar 12 01:38:08.933410 unknown[666]: fetched base config from "system"
Mar 12 01:38:08.933445 unknown[666]: fetched user config from "qemu"
Mar 12 01:38:08.933904 ignition[666]: fetch-offline: fetch-offline passed
Mar 12 01:38:08.933977 ignition[666]: Ignition finished successfully
Mar 12 01:38:08.943875 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:38:08.944231 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 12 01:38:08.966890 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 01:38:08.992605 ignition[789]: Ignition 2.19.0
Mar 12 01:38:08.992630 ignition[789]: Stage: kargs
Mar 12 01:38:08.992911 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:38:08.992932 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:38:09.002258 ignition[789]: kargs: kargs passed
Mar 12 01:38:09.002372 ignition[789]: Ignition finished successfully
Mar 12 01:38:09.009151 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 01:38:09.019910 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 01:38:09.035918 ignition[796]: Ignition 2.19.0
Mar 12 01:38:09.035943 ignition[796]: Stage: disks
Mar 12 01:38:09.036094 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:38:09.036107 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:38:09.045006 ignition[796]: disks: disks passed
Mar 12 01:38:09.045078 ignition[796]: Ignition finished successfully
Mar 12 01:38:09.050598 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 01:38:09.051595 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 01:38:09.061555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 01:38:09.062192 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:38:09.073379 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:38:09.073527 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:38:09.096968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 01:38:09.118634 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 12 01:38:09.124097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 01:38:09.133931 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 01:38:09.235732 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 01:38:09.236426 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 01:38:09.239373 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:38:09.253775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:38:09.272372 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Mar 12 01:38:09.272408 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:38:09.272429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:38:09.272449 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:38:09.257563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 01:38:09.272719 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 01:38:09.288453 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:38:09.272790 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 01:38:09.272818 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:38:09.282637 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 01:38:09.288572 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:38:09.316885 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 01:38:09.358528 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 01:38:09.364495 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 12 01:38:09.369961 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 01:38:09.374478 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 01:38:09.476227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 01:38:09.490837 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 01:38:09.494606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 01:38:09.504367 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 01:38:09.512291 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:38:09.531295 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 01:38:09.547169 ignition[928]: INFO : Ignition 2.19.0 Mar 12 01:38:09.547169 ignition[928]: INFO : Stage: mount Mar 12 01:38:09.551416 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:38:09.551416 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:38:09.557711 ignition[928]: INFO : mount: mount passed Mar 12 01:38:09.559695 ignition[928]: INFO : Ignition finished successfully Mar 12 01:38:09.563498 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 01:38:09.574871 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:38:09.582066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:38:09.599150 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 12 01:38:09.599190 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:38:09.599201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:38:09.601558 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:38:09.608693 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:38:09.610560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:38:09.639164 ignition[957]: INFO : Ignition 2.19.0 Mar 12 01:38:09.639164 ignition[957]: INFO : Stage: files Mar 12 01:38:09.644504 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:38:09.644504 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:38:09.644504 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:38:09.644504 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:38:09.644504 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:38:09.644504 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:38:09.665206 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:38:09.665206 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:38:09.665206 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:38:09.665206 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:38:09.645398 unknown[957]: wrote ssh authorized keys file for user: core Mar 12 01:38:09.692438 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:38:09.811617 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:38:09.811617 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:38:09.822679 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 12 01:38:10.007192 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Bad Gateway Mar 12 01:38:10.013864 systemd-networkd[785]: eth0: Gained IPv6LL Mar 12 01:38:10.207926 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #2 Mar 12 01:38:10.352074 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Bad Gateway Mar 12 01:38:10.753117 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #3 Mar 12 01:38:11.024944 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 01:38:11.601979 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:38:11.601979 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 01:38:11.611461 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 12 01:38:11.617359 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:38:11.656678 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:38:11.660635 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:38:11.660635 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:38:11.660635 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:38:11.660635 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:38:11.660635 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:38:11.660635 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:38:11.660635 ignition[957]: INFO : files: files passed Mar 12 01:38:11.660635 ignition[957]: INFO : Ignition finished successfully Mar 12 01:38:11.660610 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:38:11.685017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:38:11.690457 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 01:38:11.694768 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:38:11.711063 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:38:11.694917 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:38:11.723035 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:38:11.723035 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:38:11.714357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:38:11.734420 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:38:11.718148 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:38:11.742005 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:38:11.775254 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:38:11.775466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:38:11.780976 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:38:11.786192 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:38:11.791055 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:38:11.807029 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Mar 12 01:38:11.823027 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:38:11.844008 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:38:11.860757 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:38:11.866505 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:38:11.872416 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:38:11.877021 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:38:11.879443 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:38:11.885719 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:38:11.890772 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:38:11.895170 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:38:11.900414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:38:11.905978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:38:11.911387 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:38:11.916409 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:38:11.922411 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:38:11.927882 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:38:11.932966 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:38:11.936999 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:38:11.939621 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:38:11.947346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:38:11.960130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:38:11.967873 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:38:11.971734 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:38:11.981162 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:38:11.984362 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:38:11.991943 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:38:11.995145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:38:12.003913 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:38:12.010002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:38:12.013739 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:38:12.023287 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:38:12.029337 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:38:12.034502 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:38:12.037042 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:38:12.043784 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:38:12.046730 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:38:12.058556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 12 01:38:12.062078 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:38:12.068970 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:38:12.071489 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:38:12.094059 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:38:12.099881 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:38:12.104403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:38:12.104570 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:38:12.112765 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:38:12.115288 ignition[1012]: INFO : Ignition 2.19.0 Mar 12 01:38:12.115288 ignition[1012]: INFO : Stage: umount Mar 12 01:38:12.115288 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:38:12.115288 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:38:12.127474 ignition[1012]: INFO : umount: umount passed Mar 12 01:38:12.127474 ignition[1012]: INFO : Ignition finished successfully Mar 12 01:38:12.115370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:38:12.137611 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:38:12.140826 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:38:12.143467 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:38:12.151145 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:38:12.155461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:38:12.162766 systemd[1]: Stopped target network.target - Network. Mar 12 01:38:12.166968 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:38:12.167064 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:38:12.173874 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:38:12.173975 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:38:12.178460 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:38:12.178521 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:38:12.186701 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:38:12.186768 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:38:12.192127 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:38:12.194695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:38:12.206995 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:38:12.207221 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:38:12.210238 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:38:12.210408 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:38:12.214821 systemd-networkd[785]: eth0: DHCPv6 lease lost Mar 12 01:38:12.219377 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:38:12.219534 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:38:12.224226 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:38:12.224459 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 12 01:38:12.231986 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:38:12.232052 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:38:12.246989 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:38:12.249210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:38:12.249375 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:38:12.256904 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:38:12.256980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:38:12.262053 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:38:12.262111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:38:12.267829 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:38:12.267884 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:38:12.273193 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:38:12.287266 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:38:12.287577 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:38:12.292549 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:38:12.292718 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:38:12.298481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:38:12.298545 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 01:38:12.302392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:38:12.302437 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:38:12.305103 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:38:12.305157 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:38:12.310790 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:38:12.310844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:38:12.315824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:38:12.315904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:38:12.332952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:38:12.336198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:38:12.336284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:38:12.342245 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 12 01:38:12.342373 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:38:12.347585 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:38:12.347714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:38:12.353463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:38:12.353546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:38:12.361444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Mar 12 01:38:12.361632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:38:12.370962 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:38:12.399935 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:38:12.412603 systemd[1]: Switching root. Mar 12 01:38:12.438922 systemd-journald[193]: Journal stopped Mar 12 01:38:13.667214 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Mar 12 01:38:13.667342 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:38:13.667362 kernel: SELinux: policy capability open_perms=1 Mar 12 01:38:13.667375 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:38:13.667385 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:38:13.667395 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:38:13.667405 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:38:13.667415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:38:13.667425 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:38:13.667438 kernel: audit: type=1403 audit(1773279492.602:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:38:13.667450 systemd[1]: Successfully loaded SELinux policy in 51.326ms. Mar 12 01:38:13.667478 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.114ms. Mar 12 01:38:13.667492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:38:13.667503 systemd[1]: Detected virtualization kvm. Mar 12 01:38:13.667514 systemd[1]: Detected architecture x86-64. Mar 12 01:38:13.667526 systemd[1]: Detected first boot. Mar 12 01:38:13.667536 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:38:13.667547 zram_generator::config[1055]: No configuration found. Mar 12 01:38:13.667559 systemd[1]: Populated /etc with preset unit settings. Mar 12 01:38:13.667570 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:38:13.667583 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 01:38:13.667593 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:38:13.667604 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:38:13.667615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:38:13.667626 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:38:13.667637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:38:13.667686 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:38:13.667698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:38:13.667712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:38:13.667723 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:38:13.667734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:38:13.667745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
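The zram_generator message during this first boot into the real root means exactly what it says: systemd-zram-generator ran as a generator, found no zram-generator.conf under /etc or /usr, and therefore set up no swap-on-zram devices. Opting in is a matter of dropping a config such as the following at /etc/systemd/zram-generator.conf (sizes illustrative):

    [zram0]
    # half of RAM, capped at 4 GiB
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd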
Mar 12 01:38:13.667756 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:38:13.667766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:38:13.667777 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:38:13.667788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:38:13.667800 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:38:13.667813 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:38:13.667823 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 01:38:13.667839 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:38:13.667850 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:38:13.667861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:38:13.667871 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:38:13.667886 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:38:13.667897 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:38:13.667910 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:38:13.667920 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:38:13.667931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:38:13.667942 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:38:13.667953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:38:13.667963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:38:13.667975 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:38:13.667985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 01:38:13.667996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:38:13.668009 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:38:13.668020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:38:13.668030 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:38:13.668042 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:38:13.668057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:38:13.668068 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:38:13.668079 systemd[1]: Reached target machines.target - Containers. Mar 12 01:38:13.668090 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:38:13.668101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:38:13.668114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:38:13.668125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Mar 12 01:38:13.668136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:38:13.668146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:38:13.668157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:38:13.668167 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:38:13.668178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:38:13.668188 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:38:13.668202 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:38:13.668212 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:38:13.668223 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:38:13.668233 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 01:38:13.668244 kernel: ACPI: bus type drm_connector registered Mar 12 01:38:13.668254 kernel: fuse: init (API version 7.39) Mar 12 01:38:13.668264 kernel: loop: module loaded Mar 12 01:38:13.668275 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:38:13.668286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:38:13.668329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:38:13.668340 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:38:13.668351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:38:13.668362 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:38:13.668393 systemd-journald[1139]: Collecting audit messages is disabled. Mar 12 01:38:13.668414 systemd[1]: Stopped verity-setup.service. Mar 12 01:38:13.668425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:38:13.668437 systemd-journald[1139]: Journal started Mar 12 01:38:13.668459 systemd-journald[1139]: Runtime Journal (/run/log/journal/297d69531d8340cda042272f5727ac8b) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:38:13.242847 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:38:13.262457 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:38:13.263210 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:38:13.263698 systemd[1]: systemd-journald.service: Consumed 1.269s CPU time. Mar 12 01:38:13.678934 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:38:13.680014 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:38:13.682638 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:38:13.685431 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:38:13.687933 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:38:13.690695 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:38:13.693701 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:38:13.696510 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
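The burst of modprobe@*.service jobs above comes from a single template unit that systemd ships; the instance name is the module, so modprobe@dm_mod.service loads dm_mod, modprobe@fuse.service loads fuse, and so on. Paraphrased from memory rather than copied verbatim, the template is essentially:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # leading "-": a module that does not exist is not a failure
    ExecStart=-/usr/sbin/modprobe -abq %I

The interleaved kernel lines (ACPI: bus type drm_connector registered, fuse: init, loop: module loaded) are those modules announcing themselves as each instance completes.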
Mar 12 01:38:13.699989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:38:13.703265 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:38:13.703508 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:38:13.707045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:38:13.707360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:38:13.710922 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:38:13.711120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:38:13.714239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:38:13.714487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:38:13.718029 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:38:13.718294 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:38:13.721514 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:38:13.721828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:38:13.724976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:38:13.728254 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:38:13.732063 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:38:13.746841 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:38:13.761793 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:38:13.766567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:38:13.770152 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:38:13.770200 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:38:13.775712 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:38:13.782567 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:38:13.788104 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:38:13.792507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:38:13.794254 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:38:13.800339 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:38:13.804389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:38:13.807744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:38:13.811987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:38:13.815396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:38:13.832443 systemd-journald[1139]: Time spent on flushing to /var/log/journal/297d69531d8340cda042272f5727ac8b is 28.184ms for 988 entries. 
Mar 12 01:38:13.832443 systemd-journald[1139]: System Journal (/var/log/journal/297d69531d8340cda042272f5727ac8b) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:38:13.897053 systemd-journald[1139]: Received client request to flush runtime journal. Mar 12 01:38:13.897108 kernel: loop0: detected capacity change from 0 to 140768 Mar 12 01:38:13.821145 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:38:13.827463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:38:13.839460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:38:13.849496 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:38:13.860593 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:38:13.868772 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:38:13.883186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:38:13.889129 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 12 01:38:13.889143 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 12 01:38:13.889757 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:38:13.896264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:38:13.903010 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:38:13.913871 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:38:13.915158 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:38:13.927985 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:38:13.940831 kernel: loop1: detected capacity change from 0 to 217752 Mar 12 01:38:13.940727 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:38:13.950175 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:38:13.959380 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:38:13.960521 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:38:13.973898 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 12 01:38:13.999711 kernel: loop2: detected capacity change from 0 to 142488 Mar 12 01:38:14.004789 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:38:14.015908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:38:14.047135 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 12 01:38:14.047630 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 12 01:38:14.052734 kernel: loop3: detected capacity change from 0 to 140768 Mar 12 01:38:14.055487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:38:14.071872 kernel: loop4: detected capacity change from 0 to 217752 Mar 12 01:38:14.088737 kernel: loop5: detected capacity change from 0 to 142488 Mar 12 01:38:14.105891 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Mar 12 01:38:14.106850 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 12 01:38:14.113510 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:38:14.113546 systemd[1]: Reloading... Mar 12 01:38:14.185745 zram_generator::config[1220]: No configuration found. Mar 12 01:38:14.255115 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:38:14.305505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:38:14.350934 systemd[1]: Reloading finished in 236 ms. Mar 12 01:38:14.390859 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:38:14.394874 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:38:14.398623 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:38:14.421122 systemd[1]: Starting ensure-sysext.service... Mar 12 01:38:14.425487 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:38:14.430453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:38:14.436258 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:38:14.436278 systemd[1]: Reloading... Mar 12 01:38:14.457057 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:38:14.457578 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:38:14.459404 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:38:14.460033 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:38:14.460414 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:38:14.463776 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Mar 12 01:38:14.466065 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:38:14.466092 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:38:14.485588 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:38:14.485607 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:38:14.517754 zram_generator::config[1294]: No configuration found. Mar 12 01:38:14.596700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1299) Mar 12 01:38:14.636779 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:38:14.648909 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:38:14.654788 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 12 01:38:14.657839 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:38:14.663272 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:38:14.663551 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:38:14.673102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
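The (sd-merge) lines are systemd-sysext merging three extension images into /usr: the kubernetes image that Ignition linked under /etc/extensions earlier, plus the containerd-flatcar and docker-flatcar images bundled with the OS. A sysext image is only accepted if it carries an extension-release file whose fields match the host; a sketch of what such a file contains (the exact values here are an assumption):

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

At runtime, systemd-sysext status lists the merged hierarchies and systemd-sysext refresh re-merges after images are added or removed; the docker.socket deprecation warning nearby is systemd parsing a unit that plausibly arrived via one of these extensions.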
Mar 12 01:38:14.674699 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:38:14.725727 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:38:14.733213 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:38:14.733417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:38:14.739412 systemd[1]: Reloading finished in 302 ms. Mar 12 01:38:14.760029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:38:14.818498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:38:14.837764 kernel: kvm_amd: TSC scaling supported Mar 12 01:38:14.837839 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:38:14.837857 kernel: kvm_amd: Nested Paging enabled Mar 12 01:38:14.840583 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:38:14.840610 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:38:14.861356 systemd[1]: Finished ensure-sysext.service. Mar 12 01:38:14.895745 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:38:14.900823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:38:14.912937 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:38:14.918742 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:38:14.922730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:38:14.927868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:38:14.935360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:38:14.942897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:38:14.948067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:38:14.952838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:38:14.954202 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:38:14.960156 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:38:14.968890 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:38:14.977048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:38:14.981510 augenrules[1384]: No rules Mar 12 01:38:14.984747 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:38:14.990892 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:38:14.998892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:38:15.002453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:38:15.004193 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:38:15.008804 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:38:15.012849 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
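augenrules reporting "No rules" means the audit rules compiler found nothing under /etc/audit/rules.d/, so audit-rules.service finished with an empty kernel ruleset. Rules use auditctl syntax, one directive per line; an illustrative drop-in (the path and key names are examples) would be:

    # /etc/audit/rules.d/10-base.rules
    -D
    -b 8192
    -w /etc/ssh/sshd_config -p wa -k sshd_config

augenrules concatenates everything in rules.d into a single /etc/audit/audit.rules and loads it, so several packages can contribute rules without editing one shared file.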
Mar 12 01:38:15.017058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:38:15.017355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:38:15.021178 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:38:15.021489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:38:15.025140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:38:15.030890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:38:15.035099 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:38:15.035397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:38:15.039033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:38:15.042997 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:38:15.065147 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:38:15.065330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:38:15.065578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:38:15.067801 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:38:15.074019 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:38:15.074108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 01:38:15.075919 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:38:15.083699 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:38:15.092922 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:38:15.111391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:38:15.125351 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:38:15.137217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:38:15.142389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:38:15.149896 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 01:38:15.163100 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:38:15.187026 systemd-networkd[1381]: lo: Link UP Mar 12 01:38:15.187034 systemd-networkd[1381]: lo: Gained carrier Mar 12 01:38:15.188706 systemd-networkd[1381]: Enumeration completed Mar 12 01:38:15.188825 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:38:15.189447 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:38:15.189473 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 12 01:38:15.190559 systemd-networkd[1381]: eth0: Link UP Mar 12 01:38:15.190568 systemd-networkd[1381]: eth0: Gained carrier Mar 12 01:38:15.190579 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:38:15.200818 systemd-resolved[1385]: Positive Trust Anchors: Mar 12 01:38:15.200969 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:38:15.200997 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:38:15.202855 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:38:15.205602 systemd-resolved[1385]: Defaulting to hostname 'linux'. Mar 12 01:38:15.208916 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:38:15.212364 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 01:38:15.216469 systemd[1]: Reached target network.target - Network. Mar 12 01:38:15.218993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:38:15.224730 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:38:15.227615 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:38:15.231196 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:38:15.234189 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:38:15.237567 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:38:15.241176 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:38:15.244637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:38:15.244722 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:38:15.247225 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:38:15.250058 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:38:15.250133 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:38:15.250142 systemd-timesyncd[1390]: Initial clock synchronization to Thu 2026-03-12 01:38:15.483049 UTC. Mar 12 01:38:15.253262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:38:15.256740 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:38:15.259917 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:38:15.264398 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:38:15.278439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:38:15.282203 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
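systemd-timesyncd took the NTP server it was handed over DHCP (10.0.0.1) and stepped the clock, which is why the "Initial clock synchronization" target (01:38:15.483049 UTC) sits slightly ahead of the journal timestamp on that line. To pin servers instead of relying on DHCP, the same daemon reads /etc/systemd/timesyncd.conf; a minimal sketch:

    [Time]
    NTP=10.0.0.1
    FallbackNTP=0.pool.ntp.org 1.pool.ntp.org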
Mar 12 01:38:15.285093 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:38:15.287764 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:38:15.290190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:38:15.290244 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:38:15.291806 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:38:15.295951 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:38:15.299702 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:38:15.304092 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:38:15.307854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:38:15.309425 jq[1430]: false Mar 12 01:38:15.309456 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:38:15.311132 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:38:15.320984 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:38:15.327975 extend-filesystems[1431]: Found loop3 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found loop4 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found loop5 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found sr0 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda1 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda2 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda3 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found usr Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda4 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda6 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda7 Mar 12 01:38:15.329257 extend-filesystems[1431]: Found vda9 Mar 12 01:38:15.329257 extend-filesystems[1431]: Checking size of /dev/vda9 Mar 12 01:38:15.328582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:38:15.348996 dbus-daemon[1429]: [system] SELinux support is enabled Mar 12 01:38:15.358554 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:38:15.361611 extend-filesystems[1431]: Resized partition /dev/vda9 Mar 12 01:38:15.371791 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:38:15.362580 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:38:15.363056 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 01:38:15.364117 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 01:38:15.374909 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:38:15.386721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1310) Mar 12 01:38:15.387590 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
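The "Flatcar Update Engine starting" line is the A/B update client (update_engine) coming up; its first Omaha check is scheduled a few lines below, and locksmithd, started shortly after, coordinates the reboots that applied updates require. Both read the /etc/flatcar/update.conf that Ignition wrote during the files stage; the format is plain KEY=VALUE, for example (values illustrative):

    GROUP=stable
    REBOOT_STRATEGY=off

With the default reboot strategy, visible in locksmithd's own startup line further down, the machine reboots as soon as an update has been applied; REBOOT_STRATEGY=off defers that to the operator.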
Mar 12 01:38:15.395761 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:38:15.401446 jq[1452]: true Mar 12 01:38:15.410195 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:38:15.410546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:38:15.411031 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:38:15.411380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:38:15.416513 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:38:15.416833 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:38:15.435848 update_engine[1451]: I20260312 01:38:15.431923 1451 main.cc:92] Flatcar Update Engine starting Mar 12 01:38:15.449432 update_engine[1451]: I20260312 01:38:15.449216 1451 update_check_scheduler.cc:74] Next update check in 4m46s Mar 12 01:38:15.451508 jq[1456]: true Mar 12 01:38:15.478930 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:38:15.452020 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:38:15.480130 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:38:15.483613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:38:15.483796 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:38:15.483796 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:38:15.483796 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 12 01:38:15.498072 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Mar 12 01:38:15.484010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 01:38:15.511048 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:38:15.511200 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:38:15.487210 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:38:15.487231 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:38:15.491414 systemd-logind[1448]: New seat seat0. Mar 12 01:38:15.492477 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:38:15.492500 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:38:15.512970 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:38:15.516045 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:38:15.519624 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:38:15.520108 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:38:15.529934 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:38:15.537937 tar[1455]: linux-amd64/LICENSE Mar 12 01:38:15.538441 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
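extend-filesystems.service is what makes the root filesystem fill its partition on first boot: it enumerated the block devices, found /dev/vda9 mounted on /, and ran an online resize2fs, which is why the kernel reports the ext4 filesystem growing from 553472 to 1864699 4k blocks without an unmount. Done by hand, the equivalent is roughly:

    # compare the partition size against the filesystem
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS /dev/vda
    # grow the mounted ext4 filesystem to fill /dev/vda9
    resize2fs /dev/vda9

The grow can happen online because ext4 supports resizing mounted filesystems, as resize2fs's own "on-line resizing required" message notes.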
Mar 12 01:38:15.539504 tar[1455]: linux-amd64/helm Mar 12 01:38:15.551346 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:38:15.560159 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:38:15.565838 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:38:15.579032 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:38:15.579485 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:38:15.595108 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:38:15.611360 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:38:15.628944 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:38:15.634083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:38:15.638986 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:38:15.654699 containerd[1457]: time="2026-03-12T01:38:15.654478215Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:38:15.684583 containerd[1457]: time="2026-03-12T01:38:15.684479810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.687513 containerd[1457]: time="2026-03-12T01:38:15.687477959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:38:15.687619 containerd[1457]: time="2026-03-12T01:38:15.687599506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:38:15.687766 containerd[1457]: time="2026-03-12T01:38:15.687747121Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:38:15.688044 containerd[1457]: time="2026-03-12T01:38:15.688020572Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:38:15.688188 containerd[1457]: time="2026-03-12T01:38:15.688167617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.688393 containerd[1457]: time="2026-03-12T01:38:15.688365686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:38:15.688469 containerd[1457]: time="2026-03-12T01:38:15.688449072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.688854 containerd[1457]: time="2026-03-12T01:38:15.688825615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:38:15.688932 containerd[1457]: time="2026-03-12T01:38:15.688911887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.689000 containerd[1457]: time="2026-03-12T01:38:15.688983590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:38:15.689080 containerd[1457]: time="2026-03-12T01:38:15.689061556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.689255 containerd[1457]: time="2026-03-12T01:38:15.689233207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.689781 containerd[1457]: time="2026-03-12T01:38:15.689759279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:38:15.690046 containerd[1457]: time="2026-03-12T01:38:15.690019485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:38:15.690113 containerd[1457]: time="2026-03-12T01:38:15.690098542Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 01:38:15.690334 containerd[1457]: time="2026-03-12T01:38:15.690277587Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 01:38:15.690496 containerd[1457]: time="2026-03-12T01:38:15.690473713Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:38:15.697172 containerd[1457]: time="2026-03-12T01:38:15.697104813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:38:15.697172 containerd[1457]: time="2026-03-12T01:38:15.697160708Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:38:15.697252 containerd[1457]: time="2026-03-12T01:38:15.697176187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:38:15.697252 containerd[1457]: time="2026-03-12T01:38:15.697190854Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:38:15.697252 containerd[1457]: time="2026-03-12T01:38:15.697202806Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 01:38:15.697396 containerd[1457]: time="2026-03-12T01:38:15.697371712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:38:15.697709 containerd[1457]: time="2026-03-12T01:38:15.697632219Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:38:15.697869 containerd[1457]: time="2026-03-12T01:38:15.697818216Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:38:15.697869 containerd[1457]: time="2026-03-12T01:38:15.697855906Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:38:15.697869 containerd[1457]: time="2026-03-12T01:38:15.697868720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 01:38:15.697936 containerd[1457]: time="2026-03-12T01:38:15.697880522Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Mar 12 01:38:15.697936 containerd[1457]: time="2026-03-12T01:38:15.697891513Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.697936 containerd[1457]: time="2026-03-12T01:38:15.697901361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.697936 containerd[1457]: time="2026-03-12T01:38:15.697912922Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.697936 containerd[1457]: time="2026-03-12T01:38:15.697925166Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.698009 containerd[1457]: time="2026-03-12T01:38:15.697946215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.698009 containerd[1457]: time="2026-03-12T01:38:15.697961222Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.698009 containerd[1457]: time="2026-03-12T01:38:15.697970550Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:38:15.698009 containerd[1457]: time="2026-03-12T01:38:15.697986770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698009 containerd[1457]: time="2026-03-12T01:38:15.698002409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698091 containerd[1457]: time="2026-03-12T01:38:15.698012809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698091 containerd[1457]: time="2026-03-12T01:38:15.698023659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698091 containerd[1457]: time="2026-03-12T01:38:15.698035852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698091 containerd[1457]: time="2026-03-12T01:38:15.698056190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698091 containerd[1457]: time="2026-03-12T01:38:15.698074183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698166 containerd[1457]: time="2026-03-12T01:38:15.698094912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698166 containerd[1457]: time="2026-03-12T01:38:15.698114970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698166 containerd[1457]: time="2026-03-12T01:38:15.698137752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698166 containerd[1457]: time="2026-03-12T01:38:15.698157499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698227 containerd[1457]: time="2026-03-12T01:38:15.698175453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 12 01:38:15.698227 containerd[1457]: time="2026-03-12T01:38:15.698185672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698227 containerd[1457]: time="2026-03-12T01:38:15.698198325Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:38:15.698227 containerd[1457]: time="2026-03-12T01:38:15.698215447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698292 containerd[1457]: time="2026-03-12T01:38:15.698227029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698292 containerd[1457]: time="2026-03-12T01:38:15.698236357Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:38:15.698292 containerd[1457]: time="2026-03-12T01:38:15.698279107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 01:38:15.698372 containerd[1457]: time="2026-03-12T01:38:15.698292411Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:38:15.698372 containerd[1457]: time="2026-03-12T01:38:15.698341893Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:38:15.698372 containerd[1457]: time="2026-03-12T01:38:15.698353465Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:38:15.698372 containerd[1457]: time="2026-03-12T01:38:15.698362302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:38:15.698438 containerd[1457]: time="2026-03-12T01:38:15.698379113Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 01:38:15.698438 containerd[1457]: time="2026-03-12T01:38:15.698389402Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:38:15.698438 containerd[1457]: time="2026-03-12T01:38:15.698402326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 01:38:15.698760 containerd[1457]: time="2026-03-12T01:38:15.698622738Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:38:15.698915 containerd[1457]: time="2026-03-12T01:38:15.698778709Z" level=info msg="Connect containerd service" Mar 12 01:38:15.698915 containerd[1457]: time="2026-03-12T01:38:15.698827079Z" level=info msg="using legacy CRI server" Mar 12 01:38:15.698915 containerd[1457]: time="2026-03-12T01:38:15.698837469Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:38:15.698964 containerd[1457]: time="2026-03-12T01:38:15.698943657Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:38:15.699802 containerd[1457]: time="2026-03-12T01:38:15.699768758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:38:15.700453 
containerd[1457]: time="2026-03-12T01:38:15.699967121Z" level=info msg="Start subscribing containerd event" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700042912Z" level=info msg="Start recovering state" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700099477Z" level=info msg="Start event monitor" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700112059Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700121008Z" level=info msg="Start snapshots syncer" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700171883Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700207780Z" level=info msg="Start streaming server" Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700256048Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:38:15.700453 containerd[1457]: time="2026-03-12T01:38:15.700362497Z" level=info msg="containerd successfully booted in 0.047220s" Mar 12 01:38:15.700499 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:38:15.974882 tar[1455]: linux-amd64/README.md Mar 12 01:38:15.990853 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:38:16.293184 systemd-networkd[1381]: eth0: Gained IPv6LL Mar 12 01:38:16.297604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:38:16.301894 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:38:16.312078 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:38:16.316378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:16.320879 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:38:16.343514 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:38:16.343790 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:38:16.348472 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:38:16.349793 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:38:17.123801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:17.127979 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:38:17.132075 systemd[1]: Startup finished in 1.072s (kernel) + 6.837s (initrd) + 4.577s (userspace) = 12.487s. Mar 12 01:38:17.132957 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:38:17.523795 kubelet[1543]: E0312 01:38:17.523617 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:38:17.527071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:38:17.527297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:38:19.113378 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
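The kubelet failure just logged is the normal first-boot state: the service is pointed at /var/lib/kubelet/config.yaml, which nothing has written yet (kubeadm creates it during init), so the read fails with ENOENT and the process exits with status 1 for systemd to retry later. A minimal Go sketch of that failure path, with the error text mirroring the log (the exit handling here is illustrative, not the kubelet's actual code):

    package main

    import (
            "errors"
            "fmt"
            "io/fs"
            "os"
    )

    func main() {
            const path = "/var/lib/kubelet/config.yaml" // path from the log
            if _, err := os.ReadFile(path); err != nil {
                    if errors.Is(err, fs.ErrNotExist) {
                            fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", path, err)
                            os.Exit(1) // systemd records status=1/FAILURE, as above
                    }
            }
    }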
Mar 12 01:38:19.115008 systemd[1]: Started sshd@0-10.0.0.156:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642). Mar 12 01:38:19.171374 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:19.174131 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:19.186096 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:38:19.202087 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:38:19.204329 systemd-logind[1448]: New session 1 of user core. Mar 12 01:38:19.216509 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:38:19.219485 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:38:19.234058 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:38:19.365177 systemd[1561]: Queued start job for default target default.target. Mar 12 01:38:19.380333 systemd[1561]: Created slice app.slice - User Application Slice. Mar 12 01:38:19.380409 systemd[1561]: Reached target paths.target - Paths. Mar 12 01:38:19.380423 systemd[1561]: Reached target timers.target - Timers. Mar 12 01:38:19.382259 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:38:19.395850 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:38:19.396040 systemd[1561]: Reached target sockets.target - Sockets. Mar 12 01:38:19.396088 systemd[1561]: Reached target basic.target - Basic System. Mar 12 01:38:19.396142 systemd[1561]: Reached target default.target - Main User Target. Mar 12 01:38:19.396197 systemd[1561]: Startup finished in 153ms. Mar 12 01:38:19.396383 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:38:19.398359 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:38:19.473151 systemd[1]: Started sshd@1-10.0.0.156:22-10.0.0.1:53646.service - OpenSSH per-connection server daemon (10.0.0.1:53646). Mar 12 01:38:19.514757 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 53646 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:19.516822 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:19.522993 systemd-logind[1448]: New session 2 of user core. Mar 12 01:38:19.532873 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:38:19.591395 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:19.598330 systemd[1]: sshd@1-10.0.0.156:22-10.0.0.1:53646.service: Deactivated successfully. Mar 12 01:38:19.600017 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:38:19.601546 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:38:19.623201 systemd[1]: Started sshd@2-10.0.0.156:22-10.0.0.1:53660.service - OpenSSH per-connection server daemon (10.0.0.1:53660). Mar 12 01:38:19.624757 systemd-logind[1448]: Removed session 2. Mar 12 01:38:19.665641 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 53660 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:19.667287 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:19.672385 systemd-logind[1448]: New session 3 of user core. 
Mar 12 01:38:19.681841 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:38:19.734431 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:19.745028 systemd[1]: sshd@2-10.0.0.156:22-10.0.0.1:53660.service: Deactivated successfully. Mar 12 01:38:19.747072 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:38:19.748943 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:38:19.765187 systemd[1]: Started sshd@3-10.0.0.156:22-10.0.0.1:53664.service - OpenSSH per-connection server daemon (10.0.0.1:53664). Mar 12 01:38:19.766544 systemd-logind[1448]: Removed session 3. Mar 12 01:38:19.801199 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 53664 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:19.803195 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:19.809060 systemd-logind[1448]: New session 4 of user core. Mar 12 01:38:19.830954 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:38:19.890404 sshd[1586]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:19.907037 systemd[1]: sshd@3-10.0.0.156:22-10.0.0.1:53664.service: Deactivated successfully. Mar 12 01:38:19.909476 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:38:19.911766 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:38:19.922160 systemd[1]: Started sshd@4-10.0.0.156:22-10.0.0.1:53680.service - OpenSSH per-connection server daemon (10.0.0.1:53680). Mar 12 01:38:19.923827 systemd-logind[1448]: Removed session 4. Mar 12 01:38:19.961589 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 53680 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:19.963519 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:19.968641 systemd-logind[1448]: New session 5 of user core. Mar 12 01:38:19.978836 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:38:20.052062 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:38:20.052497 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:38:20.069421 sudo[1596]: pam_unix(sudo:session): session closed for user root Mar 12 01:38:20.071345 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:20.086311 systemd[1]: sshd@4-10.0.0.156:22-10.0.0.1:53680.service: Deactivated successfully. Mar 12 01:38:20.087816 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:38:20.089294 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:38:20.090727 systemd[1]: Started sshd@5-10.0.0.156:22-10.0.0.1:53682.service - OpenSSH per-connection server daemon (10.0.0.1:53682). Mar 12 01:38:20.091604 systemd-logind[1448]: Removed session 5. Mar 12 01:38:20.130430 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:20.131886 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:20.136271 systemd-logind[1448]: New session 6 of user core. Mar 12 01:38:20.150819 systemd[1]: Started session-6.scope - Session 6 of User core. 
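Each accepted connection above reports the same client-key fingerprint (SHA256:wDPRTEst...), i.e. the unpadded base64 SHA-256 digest of the wire-format public key. A sketch that recomputes such a fingerprint from the authorized_keys file update-ssh-keys wrote earlier, via the standard x/crypto helper:

    package main

    import (
            "fmt"
            "os"

            "golang.org/x/crypto/ssh"
    )

    func main() {
            raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // file from the log
            if err != nil {
                    panic(err)
            }
            pub, _, _, _, err := ssh.ParseAuthorizedKey(raw) // parses the first key entry
            if err != nil {
                    panic(err)
            }
            // FingerprintSHA256 prefixes the digest with "SHA256:", matching sshd's format.
            fmt.Println(ssh.FingerprintSHA256(pub))
    }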
Mar 12 01:38:20.207759 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:38:20.208275 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:38:20.213049 sudo[1605]: pam_unix(sudo:session): session closed for user root Mar 12 01:38:20.219580 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:38:20.220006 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:38:20.238944 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:38:20.241029 auditctl[1608]: No rules Mar 12 01:38:20.241411 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:38:20.241742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:38:20.244345 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:38:20.278162 augenrules[1626]: No rules Mar 12 01:38:20.279126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:38:20.280365 sudo[1604]: pam_unix(sudo:session): session closed for user root Mar 12 01:38:20.282562 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:20.304342 systemd[1]: sshd@5-10.0.0.156:22-10.0.0.1:53682.service: Deactivated successfully. Mar 12 01:38:20.305933 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:38:20.307477 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:38:20.315973 systemd[1]: Started sshd@6-10.0.0.156:22-10.0.0.1:53694.service - OpenSSH per-connection server daemon (10.0.0.1:53694). Mar 12 01:38:20.317215 systemd-logind[1448]: Removed session 6. Mar 12 01:38:20.350760 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 53694 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:20.352159 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:20.356928 systemd-logind[1448]: New session 7 of user core. Mar 12 01:38:20.370900 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:38:20.426297 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:38:20.426764 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:38:20.729108 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:38:20.729267 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:38:21.017015 dockerd[1655]: time="2026-03-12T01:38:21.016760785Z" level=info msg="Starting up" Mar 12 01:38:21.274983 dockerd[1655]: time="2026-03-12T01:38:21.274805075Z" level=info msg="Loading containers: start." Mar 12 01:38:21.420797 kernel: Initializing XFRM netlink socket Mar 12 01:38:21.520504 systemd-networkd[1381]: docker0: Link UP Mar 12 01:38:21.564835 dockerd[1655]: time="2026-03-12T01:38:21.564637905Z" level=info msg="Loading containers: done." Mar 12 01:38:21.582368 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck591722921-merged.mount: Deactivated successfully. 
Mar 12 01:38:21.585469 dockerd[1655]: time="2026-03-12T01:38:21.585377531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:38:21.585562 dockerd[1655]: time="2026-03-12T01:38:21.585548904Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:38:21.585803 dockerd[1655]: time="2026-03-12T01:38:21.585739085Z" level=info msg="Daemon has completed initialization" Mar 12 01:38:21.628830 dockerd[1655]: time="2026-03-12T01:38:21.628755005Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:38:21.629299 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:38:22.059888 containerd[1457]: time="2026-03-12T01:38:22.059740292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 12 01:38:22.629629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838240538.mount: Deactivated successfully. Mar 12 01:38:23.793389 containerd[1457]: time="2026-03-12T01:38:23.793279962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:23.794362 containerd[1457]: time="2026-03-12T01:38:23.794272176Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 12 01:38:23.796229 containerd[1457]: time="2026-03-12T01:38:23.796123308Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:23.800347 containerd[1457]: time="2026-03-12T01:38:23.800276276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:23.802455 containerd[1457]: time="2026-03-12T01:38:23.802391359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.742596458s" Mar 12 01:38:23.802455 containerd[1457]: time="2026-03-12T01:38:23.802445779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 12 01:38:23.803544 containerd[1457]: time="2026-03-12T01:38:23.803505534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 12 01:38:25.137482 containerd[1457]: time="2026-03-12T01:38:25.137397337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:25.138764 containerd[1457]: time="2026-03-12T01:38:25.138700824Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 12 01:38:25.140356 containerd[1457]: time="2026-03-12T01:38:25.140214594Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:25.144747 containerd[1457]: time="2026-03-12T01:38:25.144647365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:25.146328 containerd[1457]: time="2026-03-12T01:38:25.146278418Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.342733054s" Mar 12 01:38:25.146401 containerd[1457]: time="2026-03-12T01:38:25.146330617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 12 01:38:25.147190 containerd[1457]: time="2026-03-12T01:38:25.146990565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 12 01:38:26.222414 containerd[1457]: time="2026-03-12T01:38:26.222316529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.225279 containerd[1457]: time="2026-03-12T01:38:26.225053724Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 12 01:38:26.225682 containerd[1457]: time="2026-03-12T01:38:26.225593691Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.229678 containerd[1457]: time="2026-03-12T01:38:26.229596266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:26.231504 containerd[1457]: time="2026-03-12T01:38:26.231458242Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.084432367s" Mar 12 01:38:26.231504 containerd[1457]: time="2026-03-12T01:38:26.231497077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 12 01:38:26.232151 containerd[1457]: time="2026-03-12T01:38:26.232037675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 12 01:38:27.220235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487909202.mount: Deactivated successfully. 
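For scale, the three pulls completed so far work out to roughly 14-16 MB/s each. A quick check, with the byte counters and durations copied verbatim from the "stop pulling" and "Pulled image" entries above:

    package main

    import "fmt"

    func main() {
            pulls := []struct {
                    name  string
                    bytes float64 // "bytes read" counter
                    secs  float64 // "Pulled image ... in" duration
            }{
                    {"kube-apiserver:v1.35.2", 27696467, 1.742596458},
                    {"kube-controller-manager:v1.35.2", 21450700, 1.342733054},
                    {"kube-scheduler:v1.35.2", 15548429, 1.084432367},
            }
            for _, p := range pulls {
                    fmt.Printf("%-33s %5.1f MB/s\n", p.name, p.bytes/p.secs/1e6)
            }
    }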
Mar 12 01:38:27.516247 containerd[1457]: time="2026-03-12T01:38:27.516069941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:27.517303 containerd[1457]: time="2026-03-12T01:38:27.517251064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 12 01:38:27.518359 containerd[1457]: time="2026-03-12T01:38:27.518312731Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:27.520538 containerd[1457]: time="2026-03-12T01:38:27.520502577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:27.521426 containerd[1457]: time="2026-03-12T01:38:27.521318127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.289167476s" Mar 12 01:38:27.521426 containerd[1457]: time="2026-03-12T01:38:27.521370057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 12 01:38:27.522032 containerd[1457]: time="2026-03-12T01:38:27.521984545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 12 01:38:27.777565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:38:27.792049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:27.970859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:27.976461 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:38:27.996482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185464617.mount: Deactivated successfully. Mar 12 01:38:29.114459 kubelet[1887]: E0312 01:38:28.331607 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:38:29.121335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:38:29.121772 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
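Note the spacing of the kubelet restarts: the first run failed at 01:38:17.527 and systemd scheduled this retry at 01:38:27.777, just over ten seconds later, consistent with a Restart=always / RestartSec=10s drop-in (an assumption; the unit file itself is not part of the log). Verifying the interval from the two journal timestamps:

    package main

    import (
            "fmt"
            "time"
    )

    func main() {
            const layout = "15:04:05.000000"
            exit, _ := time.Parse(layout, "01:38:17.527297")    // kubelet.service: Failed
            restart, _ := time.Parse(layout, "01:38:27.777565") // Scheduled restart job
            fmt.Println(restart.Sub(exit)) // ~10.25s
    }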
Mar 12 01:38:31.503919 containerd[1457]: time="2026-03-12T01:38:31.503824928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.504775 containerd[1457]: time="2026-03-12T01:38:31.504686487Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 12 01:38:31.506776 containerd[1457]: time="2026-03-12T01:38:31.506627774Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.513327 containerd[1457]: time="2026-03-12T01:38:31.513241970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.515624 containerd[1457]: time="2026-03-12T01:38:31.515427923Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.993385603s" Mar 12 01:38:31.515624 containerd[1457]: time="2026-03-12T01:38:31.515589913Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 12 01:38:31.517611 containerd[1457]: time="2026-03-12T01:38:31.517573139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 01:38:31.961200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667075457.mount: Deactivated successfully. 
Mar 12 01:38:31.973421 containerd[1457]: time="2026-03-12T01:38:31.973340794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.974843 containerd[1457]: time="2026-03-12T01:38:31.974755999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 12 01:38:31.976445 containerd[1457]: time="2026-03-12T01:38:31.976352428Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.981999 containerd[1457]: time="2026-03-12T01:38:31.981901313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:31.985441 containerd[1457]: time="2026-03-12T01:38:31.985343757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 467.728592ms" Mar 12 01:38:31.985441 containerd[1457]: time="2026-03-12T01:38:31.985424300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 12 01:38:32.103802 containerd[1457]: time="2026-03-12T01:38:31.986892894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 12 01:38:32.571156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415418539.mount: Deactivated successfully. Mar 12 01:38:34.022519 containerd[1457]: time="2026-03-12T01:38:34.022341935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.023591 containerd[1457]: time="2026-03-12T01:38:34.023476014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 12 01:38:34.024992 containerd[1457]: time="2026-03-12T01:38:34.024931060Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.028772 containerd[1457]: time="2026-03-12T01:38:34.028619619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:34.030108 containerd[1457]: time="2026-03-12T01:38:34.029992732Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.043070562s" Mar 12 01:38:34.030108 containerd[1457]: time="2026-03-12T01:38:34.030057497Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 12 01:38:35.525551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 01:38:35.539938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:35.566418 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit session-7.scope)... Mar 12 01:38:35.566449 systemd[1]: Reloading... Mar 12 01:38:35.678767 zram_generator::config[2088]: No configuration found. Mar 12 01:38:35.923694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:38:36.003604 systemd[1]: Reloading finished in 436 ms. Mar 12 01:38:36.062730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:36.067567 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:38:36.070956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:36.071995 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:38:36.072398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:36.076087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:36.253318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:36.261116 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:38:36.567002 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:38:36.794151 kubelet[2139]: I0312 01:38:36.794066 2139 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 01:38:36.794151 kubelet[2139]: I0312 01:38:36.794116 2139 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:38:36.794151 kubelet[2139]: I0312 01:38:36.794134 2139 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:38:36.794151 kubelet[2139]: I0312 01:38:36.794140 2139 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 01:38:36.794359 kubelet[2139]: I0312 01:38:36.794328 2139 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 01:38:36.854948 kubelet[2139]: I0312 01:38:36.854314 2139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:38:36.854948 kubelet[2139]: E0312 01:38:36.854903 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:38:36.859212 kubelet[2139]: E0312 01:38:36.859122 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:38:36.859284 kubelet[2139]: I0312 01:38:36.859225 2139 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
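"Client rotation is on, will bootstrap in background" means this kubelet generates a private key plus certificate signing request and POSTs it to the certificates.k8s.io API, which is exactly the request refused above while the static-pod apiserver is still coming up. A rough sketch of such a node-client CSR (the subject follows the system:nodes / system:node:<name> convention; this is illustrative, not the kubelet's own code):

    package main

    import (
            "crypto/ecdsa"
            "crypto/elliptic"
            "crypto/rand"
            "crypto/x509"
            "crypto/x509/pkix"
            "encoding/pem"
            "os"
    )

    func main() {
            key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
            if err != nil {
                    panic(err)
            }
            der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
                    Subject: pkix.Name{
                            Organization: []string{"system:nodes"},
                            CommonName:   "system:node:localhost", // node name from the log
                    },
            }, key)
            if err != nil {
                    panic(err)
            }
            pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
    }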
Mar 12 01:38:36.866694 kubelet[2139]: I0312 01:38:36.866615 2139 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:38:36.867973 kubelet[2139]: I0312 01:38:36.867884 2139 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:38:36.868104 kubelet[2139]: I0312 01:38:36.867950 2139 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:38:36.868104 kubelet[2139]: I0312 01:38:36.868100 2139 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 01:38:36.868456 kubelet[2139]: I0312 01:38:36.868109 2139 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 01:38:36.868456 kubelet[2139]: I0312 01:38:36.868262 2139 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:38:36.872109 kubelet[2139]: I0312 01:38:36.871918 2139 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 01:38:36.873216 kubelet[2139]: I0312 01:38:36.873062 2139 kubelet.go:482] "Attempting to sync node with API server" Mar 12 01:38:36.873216 kubelet[2139]: I0312 01:38:36.873140 2139 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:38:36.873310 kubelet[2139]: I0312 01:38:36.873222 2139 kubelet.go:394] "Adding apiserver pod source" Mar 12 01:38:36.873310 kubelet[2139]: I0312 01:38:36.873238 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:38:36.888713 kubelet[2139]: I0312 01:38:36.887429 2139 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:38:36.889423 kubelet[2139]: I0312 01:38:36.889311 2139 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:38:36.889471 kubelet[2139]: I0312 01:38:36.889425 2139 kubelet.go:970] "Not starting 
PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:38:36.889690 kubelet[2139]: W0312 01:38:36.889519 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 01:38:36.894742 kubelet[2139]: I0312 01:38:36.894641 2139 server.go:1257] "Started kubelet" Mar 12 01:38:36.895598 kubelet[2139]: I0312 01:38:36.895407 2139 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:38:36.895598 kubelet[2139]: I0312 01:38:36.895575 2139 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:38:36.896212 kubelet[2139]: I0312 01:38:36.896157 2139 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:38:36.896372 kubelet[2139]: I0312 01:38:36.896280 2139 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:38:36.896623 kubelet[2139]: I0312 01:38:36.896604 2139 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 01:38:36.899709 kubelet[2139]: I0312 01:38:36.897440 2139 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:38:36.899709 kubelet[2139]: I0312 01:38:36.898880 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:38:36.902424 kubelet[2139]: I0312 01:38:36.901811 2139 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 01:38:36.902424 kubelet[2139]: E0312 01:38:36.902109 2139 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:38:36.902977 kubelet[2139]: I0312 01:38:36.902897 2139 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:38:36.903037 kubelet[2139]: I0312 01:38:36.902985 2139 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:38:36.903472 kubelet[2139]: E0312 01:38:36.903387 2139 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="200ms" Mar 12 01:38:36.904882 kubelet[2139]: E0312 01:38:36.903622 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.156:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.156:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf4534bfa7512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:38:36.894590226 +0000 UTC m=+0.624281011,LastTimestamp:2026-03-12 01:38:36.894590226 +0000 UTC m=+0.624281011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:38:36.905831 kubelet[2139]: E0312 01:38:36.905786 2139 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:38:36.905831 kubelet[2139]: I0312 01:38:36.905782 2139 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:38:36.906777 kubelet[2139]: I0312 01:38:36.906738 2139 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:38:36.906777 kubelet[2139]: I0312 01:38:36.906768 2139 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:38:36.916540 kubelet[2139]: I0312 01:38:36.916430 2139 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:38:36.922068 kubelet[2139]: I0312 01:38:36.922032 2139 cpu_manager.go:225] "Starting" policy="none" Mar 12 01:38:36.922068 kubelet[2139]: I0312 01:38:36.922045 2139 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 01:38:36.922068 kubelet[2139]: I0312 01:38:36.922060 2139 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 01:38:36.925837 kubelet[2139]: I0312 01:38:36.925798 2139 policy_none.go:50] "Start" Mar 12 01:38:36.925837 kubelet[2139]: I0312 01:38:36.925838 2139 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:38:36.925970 kubelet[2139]: I0312 01:38:36.925854 2139 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:38:36.928462 kubelet[2139]: I0312 01:38:36.928360 2139 policy_none.go:44] "Start" Mar 12 01:38:36.937472 kubelet[2139]: I0312 01:38:36.935921 2139 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:38:36.937472 kubelet[2139]: I0312 01:38:36.935944 2139 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 01:38:36.937472 kubelet[2139]: I0312 01:38:36.935966 2139 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 01:38:36.937472 kubelet[2139]: E0312 01:38:36.936040 2139 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:38:36.938079 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:38:36.978021 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 01:38:37.004743 kubelet[2139]: E0312 01:38:37.004612 2139 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:38:37.009236 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
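The kubepods slices above are the kubelet's QoS tiers under the systemd cgroup driver (Guaranteed pods sit directly in kubepods.slice, the others in the besteffort and burstable sub-slices); the per-pod slices created just below embed each pod's UID in the name. A sketch of that naming convention, assuming the usual systemd-driver rule of swapping dashes in the UID for underscores (podSliceName is our helper, not a kubelet API):

    package main

    import (
            "fmt"
            "strings"
    )

    // podSliceName builds kubepods-<qos>-pod<uid>.slice the way the systemd
    // cgroup driver names pod cgroups; dashes inside the UID become underscores.
    func podSliceName(qos, uid string) string {
            return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
            // Static-pod UID from the log; it is a config hash with no dashes,
            // so it passes through unchanged.
            fmt.Println(podSliceName("burstable", "730944a7ac9fe21d1f22f3d275590dab"))
    }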
Mar 12 01:38:37.016891 kubelet[2139]: E0312 01:38:37.016763 2139 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:38:37.017249 kubelet[2139]: I0312 01:38:37.017206 2139 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 01:38:37.017361 kubelet[2139]: I0312 01:38:37.017271 2139 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:38:37.017879 kubelet[2139]: I0312 01:38:37.017853 2139 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 01:38:37.023219 kubelet[2139]: E0312 01:38:37.022538 2139 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:38:37.023345 kubelet[2139]: E0312 01:38:37.023318 2139 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:38:37.061125 systemd[1]: Created slice kubepods-burstable-pod730944a7ac9fe21d1f22f3d275590dab.slice - libcontainer container kubepods-burstable-pod730944a7ac9fe21d1f22f3d275590dab.slice. Mar 12 01:38:37.094504 kubelet[2139]: E0312 01:38:37.094411 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:37.109502 kubelet[2139]: E0312 01:38:37.106756 2139 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="400ms" Mar 12 01:38:37.112123 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 12 01:38:37.132016 kubelet[2139]: E0312 01:38:37.131961 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:37.135222 kubelet[2139]: I0312 01:38:37.135035 2139 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:37.142965 kubelet[2139]: E0312 01:38:37.142882 2139 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 12 01:38:37.142948 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 12 01:38:37.147871 kubelet[2139]: E0312 01:38:37.147463 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:37.209424 kubelet[2139]: I0312 01:38:37.209315 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:37.209995 kubelet[2139]: I0312 01:38:37.209456 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:37.212360 kubelet[2139]: I0312 01:38:37.212154 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:37.214141 kubelet[2139]: I0312 01:38:37.213984 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:37.214141 kubelet[2139]: I0312 01:38:37.214093 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:37.214141 kubelet[2139]: I0312 01:38:37.214136 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:37.214361 kubelet[2139]: I0312 01:38:37.214167 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:37.214361 kubelet[2139]: I0312 01:38:37.214191 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:37.214361 kubelet[2139]: I0312 01:38:37.214214 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:37.346628 kubelet[2139]: I0312 01:38:37.346496 2139 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:37.347277 kubelet[2139]: E0312 01:38:37.347247 2139 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 12 01:38:37.399377 kubelet[2139]: E0312 01:38:37.399129 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:37.402196 containerd[1457]: time="2026-03-12T01:38:37.402022928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:730944a7ac9fe21d1f22f3d275590dab,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:37.443275 kubelet[2139]: E0312 01:38:37.443119 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:37.444520 containerd[1457]: time="2026-03-12T01:38:37.444382619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:37.459180 kubelet[2139]: E0312 01:38:37.459085 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:37.460120 containerd[1457]: time="2026-03-12T01:38:37.460042472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:37.516680 kubelet[2139]: E0312 01:38:37.516540 2139 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="800ms" Mar 12 01:38:37.750727 kubelet[2139]: I0312 01:38:37.750538 2139 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:37.751395 kubelet[2139]: E0312 01:38:37.751145 2139 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 12 01:38:37.849948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491192998.mount: Deactivated successfully. 
Mar 12 01:38:37.855420 containerd[1457]: time="2026-03-12T01:38:37.855367608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:38:37.859266 containerd[1457]: time="2026-03-12T01:38:37.859061261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:38:37.860514 containerd[1457]: time="2026-03-12T01:38:37.860414997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:38:37.861705 containerd[1457]: time="2026-03-12T01:38:37.861560735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:38:37.862732 containerd[1457]: time="2026-03-12T01:38:37.862705973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:38:37.863961 containerd[1457]: time="2026-03-12T01:38:37.863833995Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:38:37.865040 containerd[1457]: time="2026-03-12T01:38:37.864896587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:38:37.867252 containerd[1457]: time="2026-03-12T01:38:37.867213294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:38:37.870290 containerd[1457]: time="2026-03-12T01:38:37.870218560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.064248ms" Mar 12 01:38:37.873634 containerd[1457]: time="2026-03-12T01:38:37.872961890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.725501ms" Mar 12 01:38:37.877952 containerd[1457]: time="2026-03-12T01:38:37.877913220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 433.411555ms" Mar 12 01:38:38.084118 kernel: hrtimer: interrupt took 3947881 ns Mar 12 01:38:38.176842 containerd[1457]: time="2026-03-12T01:38:38.176719737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:38.176842 containerd[1457]: time="2026-03-12T01:38:38.176831832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:38.177043 containerd[1457]: time="2026-03-12T01:38:38.176872411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.177164 containerd[1457]: time="2026-03-12T01:38:38.177083758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.182446 containerd[1457]: time="2026-03-12T01:38:38.182304545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:38.182446 containerd[1457]: time="2026-03-12T01:38:38.182409515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:38.182446 containerd[1457]: time="2026-03-12T01:38:38.182424738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.182709 containerd[1457]: time="2026-03-12T01:38:38.182502094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.195367 containerd[1457]: time="2026-03-12T01:38:38.195164336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:38.195602 containerd[1457]: time="2026-03-12T01:38:38.195534730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:38.197011 containerd[1457]: time="2026-03-12T01:38:38.196918389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.197296 containerd[1457]: time="2026-03-12T01:38:38.197172232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:38.317886 kubelet[2139]: E0312 01:38:38.317841 2139 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="1.6s" Mar 12 01:38:38.346924 systemd[1]: Started cri-containerd-110a3c4dc66fe2fc4bbede4ef8ae82b42712efedd6db83530d76eb79d014a99d.scope - libcontainer container 110a3c4dc66fe2fc4bbede4ef8ae82b42712efedd6db83530d76eb79d014a99d. Mar 12 01:38:38.354268 systemd[1]: Started cri-containerd-482e1e1adff5bc4980af6da8bc38903d10f7051e8ef09a33ad365d0a2490aea1.scope - libcontainer container 482e1e1adff5bc4980af6da8bc38903d10f7051e8ef09a33ad365d0a2490aea1. Mar 12 01:38:38.361396 systemd[1]: Started cri-containerd-71bcf8a23e817b9d1e02ff9cd5215caee98491a4e938a70d3b79ee9ed4c2077b.scope - libcontainer container 71bcf8a23e817b9d1e02ff9cd5215caee98491a4e938a70d3b79ee9ed4c2077b. 
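The "Failed to ensure lease exists, will retry" entries show the node-lease controller backing off while 10.0.0.156:6443 refuses connections: the logged interval doubles 200ms → 400ms → 800ms → 1.6s across the four attempts above. An illustrative doubling-backoff loop, assuming a plain factor-of-two policy rather than reproducing the kubelet's implementation:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Matches the observed sequence: 200ms, 400ms, 800ms, 1.6s.
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("lease attempt %d failed, will retry in %v\n", attempt, interval)
		interval *= 2 // assumed simple doubling; the real controller may cap this
	}
}
```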
Mar 12 01:38:38.443818 containerd[1457]: time="2026-03-12T01:38:38.443322950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"482e1e1adff5bc4980af6da8bc38903d10f7051e8ef09a33ad365d0a2490aea1\"" Mar 12 01:38:38.445417 kubelet[2139]: E0312 01:38:38.445367 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:38.453166 containerd[1457]: time="2026-03-12T01:38:38.453078970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"110a3c4dc66fe2fc4bbede4ef8ae82b42712efedd6db83530d76eb79d014a99d\"" Mar 12 01:38:38.454218 kubelet[2139]: E0312 01:38:38.454098 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:38.457683 containerd[1457]: time="2026-03-12T01:38:38.456875800Z" level=info msg="CreateContainer within sandbox \"482e1e1adff5bc4980af6da8bc38903d10f7051e8ef09a33ad365d0a2490aea1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:38:38.459746 containerd[1457]: time="2026-03-12T01:38:38.459723057Z" level=info msg="CreateContainer within sandbox \"110a3c4dc66fe2fc4bbede4ef8ae82b42712efedd6db83530d76eb79d014a99d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:38:38.481947 containerd[1457]: time="2026-03-12T01:38:38.481837521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:730944a7ac9fe21d1f22f3d275590dab,Namespace:kube-system,Attempt:0,} returns sandbox id \"71bcf8a23e817b9d1e02ff9cd5215caee98491a4e938a70d3b79ee9ed4c2077b\"" Mar 12 01:38:38.483349 kubelet[2139]: E0312 01:38:38.483064 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:38.487986 containerd[1457]: time="2026-03-12T01:38:38.487925934Z" level=info msg="CreateContainer within sandbox \"71bcf8a23e817b9d1e02ff9cd5215caee98491a4e938a70d3b79ee9ed4c2077b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:38:38.491443 containerd[1457]: time="2026-03-12T01:38:38.491365099Z" level=info msg="CreateContainer within sandbox \"482e1e1adff5bc4980af6da8bc38903d10f7051e8ef09a33ad365d0a2490aea1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4922a8f745d1a7e44bb7725eaa4f54453ca648ef666925e6d5dd4d89fcca16c9\"" Mar 12 01:38:38.493920 containerd[1457]: time="2026-03-12T01:38:38.493875959Z" level=info msg="CreateContainer within sandbox \"110a3c4dc66fe2fc4bbede4ef8ae82b42712efedd6db83530d76eb79d014a99d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0bee9b0e4f61f5180e9064fdea467c14a9bceb9fb0bda11951f927c2c8340b3\"" Mar 12 01:38:38.494148 containerd[1457]: time="2026-03-12T01:38:38.494110155Z" level=info msg="StartContainer for \"4922a8f745d1a7e44bb7725eaa4f54453ca648ef666925e6d5dd4d89fcca16c9\"" Mar 12 01:38:38.496263 containerd[1457]: time="2026-03-12T01:38:38.496205874Z" level=info msg="StartContainer for \"b0bee9b0e4f61f5180e9064fdea467c14a9bceb9fb0bda11951f927c2c8340b3\"" Mar 12 
01:38:38.509723 containerd[1457]: time="2026-03-12T01:38:38.509587347Z" level=info msg="CreateContainer within sandbox \"71bcf8a23e817b9d1e02ff9cd5215caee98491a4e938a70d3b79ee9ed4c2077b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f5c2304d2256eedae1764df260699ca56a30f7e31c805d4dea2b6a78eaa72f9\"" Mar 12 01:38:38.511474 containerd[1457]: time="2026-03-12T01:38:38.511409660Z" level=info msg="StartContainer for \"5f5c2304d2256eedae1764df260699ca56a30f7e31c805d4dea2b6a78eaa72f9\"" Mar 12 01:38:38.550889 systemd[1]: Started cri-containerd-5f5c2304d2256eedae1764df260699ca56a30f7e31c805d4dea2b6a78eaa72f9.scope - libcontainer container 5f5c2304d2256eedae1764df260699ca56a30f7e31c805d4dea2b6a78eaa72f9. Mar 12 01:38:38.554066 kubelet[2139]: I0312 01:38:38.553851 2139 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:38.555709 kubelet[2139]: E0312 01:38:38.554589 2139 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 12 01:38:38.558850 systemd[1]: Started cri-containerd-4922a8f745d1a7e44bb7725eaa4f54453ca648ef666925e6d5dd4d89fcca16c9.scope - libcontainer container 4922a8f745d1a7e44bb7725eaa4f54453ca648ef666925e6d5dd4d89fcca16c9. Mar 12 01:38:38.562159 systemd[1]: Started cri-containerd-b0bee9b0e4f61f5180e9064fdea467c14a9bceb9fb0bda11951f927c2c8340b3.scope - libcontainer container b0bee9b0e4f61f5180e9064fdea467c14a9bceb9fb0bda11951f927c2c8340b3. Mar 12 01:38:38.849086 containerd[1457]: time="2026-03-12T01:38:38.848848376Z" level=info msg="StartContainer for \"5f5c2304d2256eedae1764df260699ca56a30f7e31c805d4dea2b6a78eaa72f9\" returns successfully" Mar 12 01:38:38.859542 containerd[1457]: time="2026-03-12T01:38:38.859429119Z" level=info msg="StartContainer for \"b0bee9b0e4f61f5180e9064fdea467c14a9bceb9fb0bda11951f927c2c8340b3\" returns successfully" Mar 12 01:38:38.954184 kubelet[2139]: E0312 01:38:38.954116 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:38:38.968146 containerd[1457]: time="2026-03-12T01:38:38.968072483Z" level=info msg="StartContainer for \"4922a8f745d1a7e44bb7725eaa4f54453ca648ef666925e6d5dd4d89fcca16c9\" returns successfully" Mar 12 01:38:38.972188 kubelet[2139]: E0312 01:38:38.972048 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:38.974323 kubelet[2139]: E0312 01:38:38.974222 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:38.982528 kubelet[2139]: E0312 01:38:38.981750 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:38.982528 kubelet[2139]: E0312 01:38:38.981978 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:38.984872 
kubelet[2139]: E0312 01:38:38.984816 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:38.985244 kubelet[2139]: E0312 01:38:38.985147 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:39.992309 kubelet[2139]: E0312 01:38:39.992173 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:39.993407 kubelet[2139]: E0312 01:38:39.993339 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:39.993755 kubelet[2139]: E0312 01:38:39.993718 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:39.993996 kubelet[2139]: E0312 01:38:39.993955 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:39.994151 kubelet[2139]: E0312 01:38:39.992320 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:39.994429 kubelet[2139]: E0312 01:38:39.994385 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:40.195167 kubelet[2139]: I0312 01:38:40.181900 2139 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:40.993771 kubelet[2139]: E0312 01:38:40.993713 2139 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:38:40.994198 kubelet[2139]: E0312 01:38:40.993857 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:41.925091 kubelet[2139]: E0312 01:38:41.925031 2139 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:38:41.989237 kubelet[2139]: I0312 01:38:41.989114 2139 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:38:42.002726 kubelet[2139]: I0312 01:38:42.002632 2139 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:42.144994 kubelet[2139]: E0312 01:38:42.144954 2139 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:42.145382 kubelet[2139]: I0312 01:38:42.145194 2139 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:42.148954 kubelet[2139]: E0312 01:38:42.148872 2139 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:42.148954 kubelet[2139]: I0312 01:38:42.148906 2139 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:42.150964 kubelet[2139]: E0312 01:38:42.150903 2139 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:42.913763 kubelet[2139]: I0312 01:38:42.911933 2139 apiserver.go:52] "Watching apiserver" Mar 12 01:38:43.003887 kubelet[2139]: I0312 01:38:43.003818 2139 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:38:44.250789 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)... Mar 12 01:38:44.250835 systemd[1]: Reloading... Mar 12 01:38:44.258562 kubelet[2139]: I0312 01:38:44.258489 2139 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:44.275789 kubelet[2139]: E0312 01:38:44.275758 2139 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:44.395745 zram_generator::config[2466]: No configuration found. Mar 12 01:38:44.506105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:38:44.592015 systemd[1]: Reloading finished in 340 ms. Mar 12 01:38:44.636044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:44.655356 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:38:44.655756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:44.655821 systemd[1]: kubelet.service: Consumed 2.700s CPU time, 127.3M memory peak, 0B memory swap peak. Mar 12 01:38:44.668059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:38:44.828354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:38:44.833202 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:38:44.993981 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:38:45.004373 kubelet[2511]: I0312 01:38:45.004244 2511 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 01:38:45.004373 kubelet[2511]: I0312 01:38:45.004292 2511 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:38:45.004373 kubelet[2511]: I0312 01:38:45.004304 2511 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:38:45.004373 kubelet[2511]: I0312 01:38:45.004313 2511 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:38:45.004863 kubelet[2511]: I0312 01:38:45.004632 2511 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 01:38:45.006077 kubelet[2511]: I0312 01:38:45.006012 2511 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:38:45.008040 kubelet[2511]: I0312 01:38:45.007958 2511 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:38:45.015376 kubelet[2511]: E0312 01:38:45.015348 2511 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:38:45.015419 kubelet[2511]: I0312 01:38:45.015398 2511 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:38:45.023456 kubelet[2511]: I0312 01:38:45.023409 2511 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:38:45.023834 kubelet[2511]: I0312 01:38:45.023738 2511 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:38:45.023990 kubelet[2511]: I0312 01:38:45.023806 2511 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:38:45.023990 kubelet[2511]: I0312 01:38:45.023967 2511 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 01:38:45.023990 kubelet[2511]: I0312 01:38:45.023976 2511 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 01:38:45.024252 kubelet[2511]: I0312 01:38:45.023996 2511 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:38:45.024252 kubelet[2511]: I0312 01:38:45.024224 2511 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 01:38:45.024483 kubelet[2511]: I0312 01:38:45.024443 2511 kubelet.go:482] "Attempting to sync 
node with API server" Mar 12 01:38:45.024483 kubelet[2511]: I0312 01:38:45.024462 2511 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:38:45.024483 kubelet[2511]: I0312 01:38:45.024485 2511 kubelet.go:394] "Adding apiserver pod source" Mar 12 01:38:45.025223 kubelet[2511]: I0312 01:38:45.024497 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:38:45.026295 kubelet[2511]: I0312 01:38:45.026268 2511 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:38:45.028709 kubelet[2511]: I0312 01:38:45.027469 2511 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:38:45.028709 kubelet[2511]: I0312 01:38:45.027505 2511 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:38:45.032095 kubelet[2511]: I0312 01:38:45.032052 2511 server.go:1257] "Started kubelet" Mar 12 01:38:45.033077 kubelet[2511]: I0312 01:38:45.033011 2511 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:38:45.033299 kubelet[2511]: I0312 01:38:45.033140 2511 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:38:45.033515 kubelet[2511]: I0312 01:38:45.033399 2511 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:38:45.033515 kubelet[2511]: I0312 01:38:45.033489 2511 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:38:45.035207 kubelet[2511]: I0312 01:38:45.035194 2511 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 01:38:45.035834 kubelet[2511]: I0312 01:38:45.035719 2511 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:38:45.044918 kubelet[2511]: I0312 01:38:45.044814 2511 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:38:45.047393 kubelet[2511]: I0312 01:38:45.047337 2511 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 01:38:45.048462 kubelet[2511]: E0312 01:38:45.048442 2511 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:38:45.050628 kubelet[2511]: I0312 01:38:45.050554 2511 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:38:45.051719 kubelet[2511]: I0312 01:38:45.051617 2511 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:38:45.058286 kubelet[2511]: I0312 01:38:45.058041 2511 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:38:45.058286 kubelet[2511]: I0312 01:38:45.058174 2511 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:38:45.065360 kubelet[2511]: I0312 01:38:45.065331 2511 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:38:45.073274 kubelet[2511]: E0312 01:38:45.073211 2511 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:38:45.077395 kubelet[2511]: I0312 01:38:45.077267 2511 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:38:45.084710 kubelet[2511]: I0312 01:38:45.084544 2511 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:38:45.084710 kubelet[2511]: I0312 01:38:45.084599 2511 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 01:38:45.084710 kubelet[2511]: I0312 01:38:45.084626 2511 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 01:38:45.084817 kubelet[2511]: E0312 01:38:45.084747 2511 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:38:45.187461 kubelet[2511]: E0312 01:38:45.186561 2511 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 01:38:45.234002 kubelet[2511]: I0312 01:38:45.233952 2511 cpu_manager.go:225] "Starting" policy="none" Mar 12 01:38:45.234002 kubelet[2511]: I0312 01:38:45.233988 2511 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 01:38:45.234002 kubelet[2511]: I0312 01:38:45.234013 2511 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234267 2511 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234310 2511 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234376 2511 policy_none.go:50] "Start" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234388 2511 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234403 2511 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234737 2511 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 01:38:45.235521 kubelet[2511]: I0312 01:38:45.234751 2511 policy_none.go:44] "Start" Mar 12 01:38:45.246287 kubelet[2511]: E0312 01:38:45.246189 2511 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:38:45.246474 kubelet[2511]: I0312 01:38:45.246418 2511 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 01:38:45.246474 kubelet[2511]: I0312 01:38:45.246449 2511 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:38:45.246892 kubelet[2511]: I0312 01:38:45.246763 2511 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 01:38:45.249605 kubelet[2511]: E0312 01:38:45.248524 2511 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:38:45.360838 kubelet[2511]: I0312 01:38:45.360705 2511 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:38:45.370472 kubelet[2511]: I0312 01:38:45.370229 2511 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 12 01:38:45.370472 kubelet[2511]: I0312 01:38:45.370334 2511 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:38:45.388960 kubelet[2511]: I0312 01:38:45.388884 2511 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:45.391343 kubelet[2511]: I0312 01:38:45.388890 2511 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.391343 kubelet[2511]: I0312 01:38:45.388890 2511 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:45.400282 kubelet[2511]: E0312 01:38:45.400214 2511 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:45.458988 kubelet[2511]: I0312 01:38:45.458874 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:45.458988 kubelet[2511]: I0312 01:38:45.458923 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:45.458988 kubelet[2511]: I0312 01:38:45.458943 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/730944a7ac9fe21d1f22f3d275590dab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"730944a7ac9fe21d1f22f3d275590dab\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:45.458988 kubelet[2511]: I0312 01:38:45.458961 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.458988 kubelet[2511]: I0312 01:38:45.458977 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.459231 kubelet[2511]: I0312 01:38:45.458990 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.459231 kubelet[2511]: I0312 01:38:45.459004 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.459231 kubelet[2511]: I0312 01:38:45.459018 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:38:45.459231 kubelet[2511]: I0312 01:38:45.459058 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:38:45.701253 kubelet[2511]: E0312 01:38:45.701059 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:45.702116 kubelet[2511]: E0312 01:38:45.701057 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:45.702116 kubelet[2511]: E0312 01:38:45.701182 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:46.036099 kubelet[2511]: I0312 01:38:46.034038 2511 apiserver.go:52] "Watching apiserver" Mar 12 01:38:46.113745 kubelet[2511]: E0312 01:38:46.113715 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:46.114116 kubelet[2511]: I0312 01:38:46.114102 2511 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:46.115493 kubelet[2511]: E0312 01:38:46.115457 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:46.124015 kubelet[2511]: E0312 01:38:46.123985 2511 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:38:46.124305 kubelet[2511]: E0312 01:38:46.124286 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:46.146781 kubelet[2511]: I0312 01:38:46.146002 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.145988412 podStartE2EDuration="1.145988412s" podCreationTimestamp="2026-03-12 01:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:46.14598699 +0000 UTC m=+1.299381680" watchObservedRunningTime="2026-03-12 01:38:46.145988412 +0000 UTC m=+1.299383112" Mar 12 01:38:46.152401 kubelet[2511]: I0312 01:38:46.152351 2511 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:38:46.154088 kubelet[2511]: I0312 01:38:46.154011 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.154005177 podStartE2EDuration="2.154005177s" podCreationTimestamp="2026-03-12 01:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:46.153800614 +0000 UTC m=+1.307195305" watchObservedRunningTime="2026-03-12 01:38:46.154005177 +0000 UTC m=+1.307399868" Mar 12 01:38:46.163204 kubelet[2511]: I0312 01:38:46.163162 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1631541699999999 podStartE2EDuration="1.16315417s" podCreationTimestamp="2026-03-12 01:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:46.163055894 +0000 UTC m=+1.316450594" watchObservedRunningTime="2026-03-12 01:38:46.16315417 +0000 UTC m=+1.316548861" Mar 12 01:38:47.117481 kubelet[2511]: E0312 01:38:47.117357 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:47.117481 kubelet[2511]: E0312 01:38:47.117395 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:48.118953 kubelet[2511]: E0312 01:38:48.118905 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:48.672245 kubelet[2511]: E0312 01:38:48.672200 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:51.218313 kubelet[2511]: I0312 01:38:51.218229 2511 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:38:51.218782 containerd[1457]: time="2026-03-12T01:38:51.218636908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:38:51.219038 kubelet[2511]: I0312 01:38:51.218998 2511 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:38:51.946613 systemd[1]: Created slice kubepods-besteffort-podf43f777b_e7a1_4ac7_a288_8e86b996dbeb.slice - libcontainer container kubepods-besteffort-podf43f777b_e7a1_4ac7_a288_8e86b996dbeb.slice. 
Mar 12 01:38:52.031455 kubelet[2511]: I0312 01:38:52.031313 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-kube-proxy\") pod \"kube-proxy-7ppts\" (UID: \"f43f777b-e7a1-4ac7-a288-8e86b996dbeb\") " pod="kube-system/kube-proxy-7ppts" Mar 12 01:38:52.031455 kubelet[2511]: I0312 01:38:52.031415 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xp9\" (UniqueName: \"kubernetes.io/projected/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-kube-api-access-27xp9\") pod \"kube-proxy-7ppts\" (UID: \"f43f777b-e7a1-4ac7-a288-8e86b996dbeb\") " pod="kube-system/kube-proxy-7ppts" Mar 12 01:38:52.031615 kubelet[2511]: I0312 01:38:52.031472 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-xtables-lock\") pod \"kube-proxy-7ppts\" (UID: \"f43f777b-e7a1-4ac7-a288-8e86b996dbeb\") " pod="kube-system/kube-proxy-7ppts" Mar 12 01:38:52.031615 kubelet[2511]: I0312 01:38:52.031501 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-lib-modules\") pod \"kube-proxy-7ppts\" (UID: \"f43f777b-e7a1-4ac7-a288-8e86b996dbeb\") " pod="kube-system/kube-proxy-7ppts" Mar 12 01:38:52.137945 kubelet[2511]: E0312 01:38:52.137875 2511 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 12 01:38:52.137945 kubelet[2511]: E0312 01:38:52.137918 2511 projected.go:196] Error preparing data for projected volume kube-api-access-27xp9 for pod kube-system/kube-proxy-7ppts: configmap "kube-root-ca.crt" not found Mar 12 01:38:52.138112 kubelet[2511]: E0312 01:38:52.137987 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-kube-api-access-27xp9 podName:f43f777b-e7a1-4ac7-a288-8e86b996dbeb nodeName:}" failed. No retries permitted until 2026-03-12 01:38:52.637969589 +0000 UTC m=+7.791364279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27xp9" (UniqueName: "kubernetes.io/projected/f43f777b-e7a1-4ac7-a288-8e86b996dbeb-kube-api-access-27xp9") pod "kube-proxy-7ppts" (UID: "f43f777b-e7a1-4ac7-a288-8e86b996dbeb") : configmap "kube-root-ca.crt" not found Mar 12 01:38:52.482039 systemd[1]: Created slice kubepods-besteffort-pod1cd6e996_d723_42a0_bd96_7e576e17efdb.slice - libcontainer container kubepods-besteffort-pod1cd6e996_d723_42a0_bd96_7e576e17efdb.slice. 
Mar 12 01:38:52.536740 kubelet[2511]: I0312 01:38:52.536698 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1cd6e996-d723-42a0-bd96-7e576e17efdb-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-78w82\" (UID: \"1cd6e996-d723-42a0-bd96-7e576e17efdb\") " pod="tigera-operator/tigera-operator-6cf4cccc57-78w82" Mar 12 01:38:52.536740 kubelet[2511]: I0312 01:38:52.536757 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfw48\" (UniqueName: \"kubernetes.io/projected/1cd6e996-d723-42a0-bd96-7e576e17efdb-kube-api-access-kfw48\") pod \"tigera-operator-6cf4cccc57-78w82\" (UID: \"1cd6e996-d723-42a0-bd96-7e576e17efdb\") " pod="tigera-operator/tigera-operator-6cf4cccc57-78w82" Mar 12 01:38:52.791061 containerd[1457]: time="2026-03-12T01:38:52.790923838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-78w82,Uid:1cd6e996-d723-42a0-bd96-7e576e17efdb,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:38:52.822137 containerd[1457]: time="2026-03-12T01:38:52.819979013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:52.822137 containerd[1457]: time="2026-03-12T01:38:52.821713412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:52.822137 containerd[1457]: time="2026-03-12T01:38:52.821778945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:52.822137 containerd[1457]: time="2026-03-12T01:38:52.822045433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:52.854856 systemd[1]: Started cri-containerd-c1519e43942420863ac883b70697813c7a213656bbd9827dc307f08307eddee9.scope - libcontainer container c1519e43942420863ac883b70697813c7a213656bbd9827dc307f08307eddee9. Mar 12 01:38:52.861096 kubelet[2511]: E0312 01:38:52.860295 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:52.861503 containerd[1457]: time="2026-03-12T01:38:52.861475980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ppts,Uid:f43f777b-e7a1-4ac7-a288-8e86b996dbeb,Namespace:kube-system,Attempt:0,}" Mar 12 01:38:52.863604 kubelet[2511]: E0312 01:38:52.863505 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:52.892027 containerd[1457]: time="2026-03-12T01:38:52.891773279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:52.892027 containerd[1457]: time="2026-03-12T01:38:52.891820052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:52.892027 containerd[1457]: time="2026-03-12T01:38:52.891832629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:52.892027 containerd[1457]: time="2026-03-12T01:38:52.891902170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:52.897833 containerd[1457]: time="2026-03-12T01:38:52.897740905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-78w82,Uid:1cd6e996-d723-42a0-bd96-7e576e17efdb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c1519e43942420863ac883b70697813c7a213656bbd9827dc307f08307eddee9\"" Mar 12 01:38:52.899806 containerd[1457]: time="2026-03-12T01:38:52.899749138Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:38:52.916934 systemd[1]: Started cri-containerd-4745e8f3739aa39f1e4daf44aa11c7ef5fa9008275b62820994e92e675e5e00f.scope - libcontainer container 4745e8f3739aa39f1e4daf44aa11c7ef5fa9008275b62820994e92e675e5e00f. Mar 12 01:38:52.941461 containerd[1457]: time="2026-03-12T01:38:52.941419256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ppts,Uid:f43f777b-e7a1-4ac7-a288-8e86b996dbeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4745e8f3739aa39f1e4daf44aa11c7ef5fa9008275b62820994e92e675e5e00f\"" Mar 12 01:38:52.942543 kubelet[2511]: E0312 01:38:52.942445 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:52.947318 containerd[1457]: time="2026-03-12T01:38:52.947285301Z" level=info msg="CreateContainer within sandbox \"4745e8f3739aa39f1e4daf44aa11c7ef5fa9008275b62820994e92e675e5e00f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:38:52.964072 containerd[1457]: time="2026-03-12T01:38:52.964036393Z" level=info msg="CreateContainer within sandbox \"4745e8f3739aa39f1e4daf44aa11c7ef5fa9008275b62820994e92e675e5e00f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcedbada19061ea50db3af691a160765f3943609e0a0265b4aa8dc179f66d3b7\"" Mar 12 01:38:52.964599 containerd[1457]: time="2026-03-12T01:38:52.964540623Z" level=info msg="StartContainer for \"fcedbada19061ea50db3af691a160765f3943609e0a0265b4aa8dc179f66d3b7\"" Mar 12 01:38:53.006890 systemd[1]: Started cri-containerd-fcedbada19061ea50db3af691a160765f3943609e0a0265b4aa8dc179f66d3b7.scope - libcontainer container fcedbada19061ea50db3af691a160765f3943609e0a0265b4aa8dc179f66d3b7. 
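The containerd lines above trace the CRI ordering the kubelet follows for each pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it (for kube-proxy-7ppts: 4745e8f3… → fcedbada… → "returns successfully"). A stand-in sketch of that sequence; the interface below is illustrative, not the real k8s.io/cri-api gRPC client:

```go
package main

import "fmt"

// runtimeService is a stand-in for the CRI runtime client; the real kubelet
// speaks gRPC to containerd (k8s.io/cri-api), which is not reproduced here.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
	return "sandbox-for-" + pod, nil // cf. 4745e8f3... in the log
}
func (fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return "ctr-" + name, nil // cf. fcedbada... in the log
}
func (fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func main() {
	var rt runtimeService = fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-proxy-7ppts") // 1: sandbox first
	id, _ := rt.CreateContainer(sb, "kube-proxy") // 2: container in sandbox
	_ = rt.StartContainer(id)                     // 3: then start it
}
```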
Mar 12 01:38:53.039211 containerd[1457]: time="2026-03-12T01:38:53.039124777Z" level=info msg="StartContainer for \"fcedbada19061ea50db3af691a160765f3943609e0a0265b4aa8dc179f66d3b7\" returns successfully" Mar 12 01:38:53.129097 kubelet[2511]: E0312 01:38:53.128948 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:53.140605 kubelet[2511]: I0312 01:38:53.138533 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7ppts" podStartSLOduration=2.138513764 podStartE2EDuration="2.138513764s" podCreationTimestamp="2026-03-12 01:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:53.138378092 +0000 UTC m=+8.291772792" watchObservedRunningTime="2026-03-12 01:38:53.138513764 +0000 UTC m=+8.291908454" Mar 12 01:38:53.911184 kubelet[2511]: E0312 01:38:53.911106 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:53.971036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174196261.mount: Deactivated successfully. Mar 12 01:38:54.641217 containerd[1457]: time="2026-03-12T01:38:54.641135442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:54.642483 containerd[1457]: time="2026-03-12T01:38:54.642412727Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:38:54.644551 containerd[1457]: time="2026-03-12T01:38:54.644465872Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:54.647125 containerd[1457]: time="2026-03-12T01:38:54.647067120Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:54.647775 containerd[1457]: time="2026-03-12T01:38:54.647709336Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.747916322s" Mar 12 01:38:54.647818 containerd[1457]: time="2026-03-12T01:38:54.647772502Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:38:54.656430 containerd[1457]: time="2026-03-12T01:38:54.656395454Z" level=info msg="CreateContainer within sandbox \"c1519e43942420863ac883b70697813c7a213656bbd9827dc307f08307eddee9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:38:54.671087 containerd[1457]: time="2026-03-12T01:38:54.671050692Z" level=info msg="CreateContainer within sandbox \"c1519e43942420863ac883b70697813c7a213656bbd9827dc307f08307eddee9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"9695802e449c5530c5bb0b4608946bb4662252409a72ccebd1b31c403d8db332\"" Mar 12 01:38:54.672572 containerd[1457]: time="2026-03-12T01:38:54.672294527Z" level=info msg="StartContainer for \"9695802e449c5530c5bb0b4608946bb4662252409a72ccebd1b31c403d8db332\"" Mar 12 01:38:54.721931 systemd[1]: Started cri-containerd-9695802e449c5530c5bb0b4608946bb4662252409a72ccebd1b31c403d8db332.scope - libcontainer container 9695802e449c5530c5bb0b4608946bb4662252409a72ccebd1b31c403d8db332. Mar 12 01:38:54.748233 containerd[1457]: time="2026-03-12T01:38:54.748108542Z" level=info msg="StartContainer for \"9695802e449c5530c5bb0b4608946bb4662252409a72ccebd1b31c403d8db332\" returns successfully" Mar 12 01:38:58.680555 kubelet[2511]: E0312 01:38:58.680461 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:58.694226 kubelet[2511]: I0312 01:38:58.693817 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-78w82" podStartSLOduration=4.943842373 podStartE2EDuration="6.693800607s" podCreationTimestamp="2026-03-12 01:38:52 +0000 UTC" firstStartedPulling="2026-03-12 01:38:52.899326131 +0000 UTC m=+8.052720831" lastFinishedPulling="2026-03-12 01:38:54.649284375 +0000 UTC m=+9.802679065" observedRunningTime="2026-03-12 01:38:55.147255328 +0000 UTC m=+10.300650017" watchObservedRunningTime="2026-03-12 01:38:58.693800607 +0000 UTC m=+13.847195297" Mar 12 01:39:00.113624 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 12 01:39:00.116150 sshd[1634]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:00.126623 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:39:00.130201 systemd[1]: sshd@6-10.0.0.156:22-10.0.0.1:53694.service: Deactivated successfully. Mar 12 01:39:00.141615 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:39:00.143295 systemd[1]: session-7.scope: Consumed 4.816s CPU time, 162.4M memory peak, 0B memory swap peak. Mar 12 01:39:00.149632 systemd-logind[1448]: Removed session 7. Mar 12 01:39:00.646496 update_engine[1451]: I20260312 01:39:00.645612 1451 update_attempter.cc:509] Updating boot flags... Mar 12 01:39:00.696707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2934) Mar 12 01:39:00.782708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2938) Mar 12 01:39:00.853710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2938) Mar 12 01:39:01.851956 systemd[1]: Created slice kubepods-besteffort-poda088ebcf_6335_4870_a322_2c8752ce3ad1.slice - libcontainer container kubepods-besteffort-poda088ebcf_6335_4870_a322_2c8752ce3ad1.slice. Mar 12 01:39:01.899957 systemd[1]: Created slice kubepods-besteffort-pod92413233_5684_44f2_8b9c_8cddcddcdd5e.slice - libcontainer container kubepods-besteffort-pod92413233_5684_44f2_8b9c_8cddcddcdd5e.slice. 
Mar 12 01:39:01.907746 kubelet[2511]: I0312 01:39:01.907706 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a088ebcf-6335-4870-a322-2c8752ce3ad1-tigera-ca-bundle\") pod \"calico-typha-74bbcb8969-bw2vd\" (UID: \"a088ebcf-6335-4870-a322-2c8752ce3ad1\") " pod="calico-system/calico-typha-74bbcb8969-bw2vd" Mar 12 01:39:01.908073 kubelet[2511]: I0312 01:39:01.907749 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a088ebcf-6335-4870-a322-2c8752ce3ad1-typha-certs\") pod \"calico-typha-74bbcb8969-bw2vd\" (UID: \"a088ebcf-6335-4870-a322-2c8752ce3ad1\") " pod="calico-system/calico-typha-74bbcb8969-bw2vd" Mar 12 01:39:01.908073 kubelet[2511]: I0312 01:39:01.907782 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/a088ebcf-6335-4870-a322-2c8752ce3ad1-kube-api-access-ng8lc\") pod \"calico-typha-74bbcb8969-bw2vd\" (UID: \"a088ebcf-6335-4870-a322-2c8752ce3ad1\") " pod="calico-system/calico-typha-74bbcb8969-bw2vd" Mar 12 01:39:02.008122 kubelet[2511]: I0312 01:39:02.008039 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-var-run-calico\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.008122 kubelet[2511]: I0312 01:39:02.008089 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-var-lib-calico\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.008122 kubelet[2511]: I0312 01:39:02.008118 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-flexvol-driver-host\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013010 kubelet[2511]: I0312 01:39:02.008143 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-nodeproc\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013010 kubelet[2511]: I0312 01:39:02.008167 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-bpffs\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013010 kubelet[2511]: I0312 01:39:02.008189 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-cni-log-dir\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 
01:39:02.013010 kubelet[2511]: I0312 01:39:02.008214 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/92413233-5684-44f2-8b9c-8cddcddcdd5e-node-certs\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013010 kubelet[2511]: I0312 01:39:02.008241 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-cni-net-dir\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013010 kubelet[2511]: I0312 01:39:02.008254 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-xtables-lock\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013136 kubelet[2511]: I0312 01:39:02.009710 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92413233-5684-44f2-8b9c-8cddcddcdd5e-tigera-ca-bundle\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013136 kubelet[2511]: I0312 01:39:02.009764 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8sj\" (UniqueName: \"kubernetes.io/projected/92413233-5684-44f2-8b9c-8cddcddcdd5e-kube-api-access-4q8sj\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013136 kubelet[2511]: I0312 01:39:02.009830 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-lib-modules\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013136 kubelet[2511]: I0312 01:39:02.010141 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-cni-bin-dir\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013136 kubelet[2511]: I0312 01:39:02.010368 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-policysync\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.013233 kubelet[2511]: I0312 01:39:02.010389 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/92413233-5684-44f2-8b9c-8cddcddcdd5e-sys-fs\") pod \"calico-node-h84tt\" (UID: \"92413233-5684-44f2-8b9c-8cddcddcdd5e\") " pod="calico-system/calico-node-h84tt" Mar 12 01:39:02.058148 kubelet[2511]: E0312 01:39:02.057865 2511 pod_workers.go:1324] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:02.112249 kubelet[2511]: I0312 01:39:02.112085 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4350a8ed-9db0-4145-8365-af9918373d13-varrun\") pod \"csi-node-driver-r96sx\" (UID: \"4350a8ed-9db0-4145-8365-af9918373d13\") " pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:02.112249 kubelet[2511]: I0312 01:39:02.112231 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4350a8ed-9db0-4145-8365-af9918373d13-kubelet-dir\") pod \"csi-node-driver-r96sx\" (UID: \"4350a8ed-9db0-4145-8365-af9918373d13\") " pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:02.112461 kubelet[2511]: I0312 01:39:02.112255 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4350a8ed-9db0-4145-8365-af9918373d13-registration-dir\") pod \"csi-node-driver-r96sx\" (UID: \"4350a8ed-9db0-4145-8365-af9918373d13\") " pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:02.112461 kubelet[2511]: I0312 01:39:02.112283 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6tc\" (UniqueName: \"kubernetes.io/projected/4350a8ed-9db0-4145-8365-af9918373d13-kube-api-access-nk6tc\") pod \"csi-node-driver-r96sx\" (UID: \"4350a8ed-9db0-4145-8365-af9918373d13\") " pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:02.112461 kubelet[2511]: I0312 01:39:02.112331 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4350a8ed-9db0-4145-8365-af9918373d13-socket-dir\") pod \"csi-node-driver-r96sx\" (UID: \"4350a8ed-9db0-4145-8365-af9918373d13\") " pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:02.118927 kubelet[2511]: E0312 01:39:02.118896 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.118927 kubelet[2511]: W0312 01:39:02.118924 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.119019 kubelet[2511]: E0312 01:39:02.118948 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.122011 kubelet[2511]: E0312 01:39:02.121980 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.122011 kubelet[2511]: W0312 01:39:02.122008 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.122086 kubelet[2511]: E0312 01:39:02.122023 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.182694 kubelet[2511]: E0312 01:39:02.182594 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:02.183869 containerd[1457]: time="2026-03-12T01:39:02.183415617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74bbcb8969-bw2vd,Uid:a088ebcf-6335-4870-a322-2c8752ce3ad1,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:02.206330 containerd[1457]: time="2026-03-12T01:39:02.206277922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h84tt,Uid:92413233-5684-44f2-8b9c-8cddcddcdd5e,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:02.211993 containerd[1457]: time="2026-03-12T01:39:02.211822874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:02.211993 containerd[1457]: time="2026-03-12T01:39:02.211896706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:02.211993 containerd[1457]: time="2026-03-12T01:39:02.211910444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:02.212157 containerd[1457]: time="2026-03-12T01:39:02.211990879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:02.213611 kubelet[2511]: E0312 01:39:02.213590 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.213980 kubelet[2511]: W0312 01:39:02.213796 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.213980 kubelet[2511]: E0312 01:39:02.213894 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.214887 kubelet[2511]: E0312 01:39:02.214714 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.214887 kubelet[2511]: W0312 01:39:02.214727 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.214887 kubelet[2511]: E0312 01:39:02.214737 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.215234 kubelet[2511]: E0312 01:39:02.215223 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.215349 kubelet[2511]: W0312 01:39:02.215337 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.215489 kubelet[2511]: E0312 01:39:02.215476 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.215975 kubelet[2511]: E0312 01:39:02.215964 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.216258 kubelet[2511]: W0312 01:39:02.216095 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.216258 kubelet[2511]: E0312 01:39:02.216112 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.216529 kubelet[2511]: E0312 01:39:02.216517 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.216739 kubelet[2511]: W0312 01:39:02.216724 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.216880 kubelet[2511]: E0312 01:39:02.216863 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.217372 kubelet[2511]: E0312 01:39:02.217321 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.217529 kubelet[2511]: W0312 01:39:02.217331 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.217529 kubelet[2511]: E0312 01:39:02.217481 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.218187 kubelet[2511]: E0312 01:39:02.218043 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.218187 kubelet[2511]: W0312 01:39:02.218053 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.218187 kubelet[2511]: E0312 01:39:02.218063 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.218612 kubelet[2511]: E0312 01:39:02.218600 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.218739 kubelet[2511]: W0312 01:39:02.218714 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.218739 kubelet[2511]: E0312 01:39:02.218728 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.219274 kubelet[2511]: E0312 01:39:02.219262 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.219357 kubelet[2511]: W0312 01:39:02.219344 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.219480 kubelet[2511]: E0312 01:39:02.219467 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.220193 kubelet[2511]: E0312 01:39:02.220103 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.220193 kubelet[2511]: W0312 01:39:02.220121 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.220565 kubelet[2511]: E0312 01:39:02.220136 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.220912 kubelet[2511]: E0312 01:39:02.220893 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.221059 kubelet[2511]: W0312 01:39:02.221031 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.221059 kubelet[2511]: E0312 01:39:02.221047 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.221777 kubelet[2511]: E0312 01:39:02.221577 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.221777 kubelet[2511]: W0312 01:39:02.221592 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.221777 kubelet[2511]: E0312 01:39:02.221606 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.222062 kubelet[2511]: E0312 01:39:02.222048 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.222144 kubelet[2511]: W0312 01:39:02.222119 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.222144 kubelet[2511]: E0312 01:39:02.222133 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.222582 kubelet[2511]: E0312 01:39:02.222549 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.222582 kubelet[2511]: W0312 01:39:02.222562 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.222582 kubelet[2511]: E0312 01:39:02.222571 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.223269 kubelet[2511]: E0312 01:39:02.223120 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.223269 kubelet[2511]: W0312 01:39:02.223131 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.223269 kubelet[2511]: E0312 01:39:02.223140 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.223632 kubelet[2511]: E0312 01:39:02.223563 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.223632 kubelet[2511]: W0312 01:39:02.223573 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.223632 kubelet[2511]: E0312 01:39:02.223582 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.224021 kubelet[2511]: E0312 01:39:02.224010 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.224118 kubelet[2511]: W0312 01:39:02.224063 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.224118 kubelet[2511]: E0312 01:39:02.224075 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.224532 kubelet[2511]: E0312 01:39:02.224520 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.224597 kubelet[2511]: W0312 01:39:02.224575 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.224597 kubelet[2511]: E0312 01:39:02.224586 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.225198 kubelet[2511]: E0312 01:39:02.225064 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.225198 kubelet[2511]: W0312 01:39:02.225075 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.225198 kubelet[2511]: E0312 01:39:02.225084 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.225507 kubelet[2511]: E0312 01:39:02.225494 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.225612 kubelet[2511]: W0312 01:39:02.225556 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.225612 kubelet[2511]: E0312 01:39:02.225568 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.226297 kubelet[2511]: E0312 01:39:02.226079 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.226297 kubelet[2511]: W0312 01:39:02.226091 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.226297 kubelet[2511]: E0312 01:39:02.226099 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.227188 kubelet[2511]: E0312 01:39:02.227175 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.227246 kubelet[2511]: W0312 01:39:02.227235 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.227292 kubelet[2511]: E0312 01:39:02.227279 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.228112 kubelet[2511]: E0312 01:39:02.228098 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.228171 kubelet[2511]: W0312 01:39:02.228161 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.228216 kubelet[2511]: E0312 01:39:02.228203 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.228808 kubelet[2511]: E0312 01:39:02.228795 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.228878 kubelet[2511]: W0312 01:39:02.228865 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.229275 kubelet[2511]: E0312 01:39:02.228917 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.229520 kubelet[2511]: E0312 01:39:02.229508 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.229582 kubelet[2511]: W0312 01:39:02.229570 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.229639 kubelet[2511]: E0312 01:39:02.229615 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.238554 kubelet[2511]: E0312 01:39:02.238532 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.238792 kubelet[2511]: W0312 01:39:02.238776 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.238924 kubelet[2511]: E0312 01:39:02.238909 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.240831 systemd[1]: Started cri-containerd-7b2a8921de17078ee4de032e6b2e72b95050893b749f0a154bc814b340242c34.scope - libcontainer container 7b2a8921de17078ee4de032e6b2e72b95050893b749f0a154bc814b340242c34. Mar 12 01:39:02.244128 containerd[1457]: time="2026-03-12T01:39:02.242701786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:02.244128 containerd[1457]: time="2026-03-12T01:39:02.242760688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:02.244128 containerd[1457]: time="2026-03-12T01:39:02.242771319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:02.244128 containerd[1457]: time="2026-03-12T01:39:02.242848698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:02.265953 systemd[1]: Started cri-containerd-fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a.scope - libcontainer container fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a. Mar 12 01:39:02.296392 containerd[1457]: time="2026-03-12T01:39:02.296330585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h84tt,Uid:92413233-5684-44f2-8b9c-8cddcddcdd5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\"" Mar 12 01:39:02.299529 containerd[1457]: time="2026-03-12T01:39:02.299430342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:39:02.302478 containerd[1457]: time="2026-03-12T01:39:02.302406066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74bbcb8969-bw2vd,Uid:a088ebcf-6335-4870-a322-2c8752ce3ad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b2a8921de17078ee4de032e6b2e72b95050893b749f0a154bc814b340242c34\"" Mar 12 01:39:02.304930 kubelet[2511]: E0312 01:39:02.304853 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:02.836805 containerd[1457]: time="2026-03-12T01:39:02.836218230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:02.837329 containerd[1457]: time="2026-03-12T01:39:02.837285635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 12 01:39:02.838365 containerd[1457]: time="2026-03-12T01:39:02.838321226Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:02.841301 containerd[1457]: time="2026-03-12T01:39:02.841234452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:02.842104 containerd[1457]: time="2026-03-12T01:39:02.841996663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 542.468381ms" Mar 12 01:39:02.842104 containerd[1457]: time="2026-03-12T01:39:02.842044832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:39:02.843958 containerd[1457]: time="2026-03-12T01:39:02.843830618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:39:02.848455 containerd[1457]: time="2026-03-12T01:39:02.848398610Z" level=info msg="CreateContainer within sandbox 
\"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:39:02.864221 containerd[1457]: time="2026-03-12T01:39:02.864154504Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860\"" Mar 12 01:39:02.864846 containerd[1457]: time="2026-03-12T01:39:02.864782330Z" level=info msg="StartContainer for \"8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860\"" Mar 12 01:39:02.874111 kubelet[2511]: E0312 01:39:02.874030 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:02.880743 kubelet[2511]: E0312 01:39:02.880709 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.880743 kubelet[2511]: W0312 01:39:02.880739 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.881412 kubelet[2511]: E0312 01:39:02.880791 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.881516 kubelet[2511]: E0312 01:39:02.881440 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.881516 kubelet[2511]: W0312 01:39:02.881451 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.881516 kubelet[2511]: E0312 01:39:02.881461 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.882293 kubelet[2511]: E0312 01:39:02.882273 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.882352 kubelet[2511]: W0312 01:39:02.882293 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.882352 kubelet[2511]: E0312 01:39:02.882314 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:39:02.882992 kubelet[2511]: E0312 01:39:02.882937 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.882992 kubelet[2511]: W0312 01:39:02.882963 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.882992 kubelet[2511]: E0312 01:39:02.882973 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.883382 kubelet[2511]: E0312 01:39:02.883364 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:39:02.883382 kubelet[2511]: W0312 01:39:02.883375 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:39:02.883428 kubelet[2511]: E0312 01:39:02.883384 2511 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:39:02.902815 systemd[1]: Started cri-containerd-8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860.scope - libcontainer container 8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860. Mar 12 01:39:02.935041 containerd[1457]: time="2026-03-12T01:39:02.934974786Z" level=info msg="StartContainer for \"8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860\" returns successfully" Mar 12 01:39:02.949506 systemd[1]: cri-containerd-8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860.scope: Deactivated successfully. 
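The flood of driver-call.go / plugins.go errors above is kubelet's periodic FlexVolume probe finding Calico's nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec before its `uds` binary exists: the exec fails, the captured output is empty, and unmarshalling an empty string yields "unexpected end of JSON input". The flexvol-driver init container that just ran above is what installs that binary (via the flexvol-driver-host host-path volume), which is why the errors stop once it completes. A minimal reproduction of the failure mode, using the path from the log; the exec error text differs slightly from kubelet's, which resolves the binary itself before running it:

```go
// Reproduce the FlexVolume probe failure: exec a driver binary that is not
// installed yet, capture empty output, and fail to parse it as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err) // binary not installed yet
	}
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err) // unexpected end of JSON input
	}
}
```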
Mar 12 01:39:02.994281 containerd[1457]: time="2026-03-12T01:39:02.994192747Z" level=info msg="shim disconnected" id=8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860 namespace=k8s.io Mar 12 01:39:02.994281 containerd[1457]: time="2026-03-12T01:39:02.994261448Z" level=warning msg="cleaning up after shim disconnected" id=8bb21f29b03504d7a1d7b01b285c50597d4f3814932214f248db202b47780860 namespace=k8s.io Mar 12 01:39:02.994281 containerd[1457]: time="2026-03-12T01:39:02.994271167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:39:03.913863 kubelet[2511]: E0312 01:39:03.913520 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:04.086308 kubelet[2511]: E0312 01:39:04.086183 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:05.546481 containerd[1457]: time="2026-03-12T01:39:05.546435269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:05.547293 containerd[1457]: time="2026-03-12T01:39:05.547234849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 12 01:39:05.548584 containerd[1457]: time="2026-03-12T01:39:05.548539789Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:05.552974 containerd[1457]: time="2026-03-12T01:39:05.552584128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:05.554225 containerd[1457]: time="2026-03-12T01:39:05.554172547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.710314412s" Mar 12 01:39:05.554225 containerd[1457]: time="2026-03-12T01:39:05.554211585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:39:05.555348 containerd[1457]: time="2026-03-12T01:39:05.555271988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:39:05.566878 containerd[1457]: time="2026-03-12T01:39:05.566820088Z" level=info msg="CreateContainer within sandbox \"7b2a8921de17078ee4de032e6b2e72b95050893b749f0a154bc814b340242c34\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:39:05.582577 containerd[1457]: time="2026-03-12T01:39:05.582537574Z" level=info msg="CreateContainer within sandbox \"7b2a8921de17078ee4de032e6b2e72b95050893b749f0a154bc814b340242c34\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"45b50704c1984207d5b72b18f643fd14e8eee0a7430883090a96ca19a0010745\"" Mar 12 01:39:05.583247 containerd[1457]: time="2026-03-12T01:39:05.583123162Z" level=info msg="StartContainer for \"45b50704c1984207d5b72b18f643fd14e8eee0a7430883090a96ca19a0010745\"" Mar 12 01:39:05.620842 systemd[1]: Started cri-containerd-45b50704c1984207d5b72b18f643fd14e8eee0a7430883090a96ca19a0010745.scope - libcontainer container 45b50704c1984207d5b72b18f643fd14e8eee0a7430883090a96ca19a0010745. Mar 12 01:39:05.669418 containerd[1457]: time="2026-03-12T01:39:05.669245269Z" level=info msg="StartContainer for \"45b50704c1984207d5b72b18f643fd14e8eee0a7430883090a96ca19a0010745\" returns successfully" Mar 12 01:39:06.085617 kubelet[2511]: E0312 01:39:06.085452 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:06.170992 kubelet[2511]: E0312 01:39:06.170818 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:06.184858 kubelet[2511]: I0312 01:39:06.184477 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-74bbcb8969-bw2vd" podStartSLOduration=1.9360772050000001 podStartE2EDuration="5.184465866s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:02.306787857 +0000 UTC m=+17.460182557" lastFinishedPulling="2026-03-12 01:39:05.555176528 +0000 UTC m=+20.708571218" observedRunningTime="2026-03-12 01:39:06.184166584 +0000 UTC m=+21.337561284" watchObservedRunningTime="2026-03-12 01:39:06.184465866 +0000 UTC m=+21.337860556" Mar 12 01:39:07.174142 kubelet[2511]: E0312 01:39:07.173192 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:08.085328 kubelet[2511]: E0312 01:39:08.085219 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:08.174917 kubelet[2511]: E0312 01:39:08.174833 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:09.813976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221845874.mount: Deactivated successfully. 
Mar 12 01:39:10.076181 containerd[1457]: time="2026-03-12T01:39:10.076025213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:10.076942 containerd[1457]: time="2026-03-12T01:39:10.076888726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 01:39:10.085227 containerd[1457]: time="2026-03-12T01:39:10.085192227Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:10.086115 kubelet[2511]: E0312 01:39:10.085321 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:10.090471 containerd[1457]: time="2026-03-12T01:39:10.090339761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:10.091495 containerd[1457]: time="2026-03-12T01:39:10.091427314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.536101258s" Mar 12 01:39:10.091577 containerd[1457]: time="2026-03-12T01:39:10.091539227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 01:39:10.121716 containerd[1457]: time="2026-03-12T01:39:10.121581821Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 01:39:10.150163 containerd[1457]: time="2026-03-12T01:39:10.150110436Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e\"" Mar 12 01:39:10.150839 containerd[1457]: time="2026-03-12T01:39:10.150796262Z" level=info msg="StartContainer for \"c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e\"" Mar 12 01:39:10.225872 systemd[1]: Started cri-containerd-c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e.scope - libcontainer container c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e. Mar 12 01:39:10.287283 containerd[1457]: time="2026-03-12T01:39:10.287245782Z" level=info msg="StartContainer for \"c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e\" returns successfully" Mar 12 01:39:10.331543 systemd[1]: cri-containerd-c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e.scope: Deactivated successfully. 
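For scale: the calico/node pull above moved its reported 159,838,426 bytes in the reported 4.536101258s, roughly 35 MB/s. Plain arithmetic on the logged numbers, nothing more:

```go
// Effective pull throughput implied by the "Pulled image" entry above;
// both constants are copied verbatim from the log.
package main

import "fmt"

func main() {
	const size = 159838426      // bytes, from the Pulled image entry
	const elapsed = 4.536101258 // seconds, from the same entry
	fmt.Printf("effective pull rate: %.1f MB/s\n", size/elapsed/1e6) // ≈ 35.2
}
```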
Mar 12 01:39:10.381459 containerd[1457]: time="2026-03-12T01:39:10.381249115Z" level=info msg="shim disconnected" id=c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e namespace=k8s.io Mar 12 01:39:10.381459 containerd[1457]: time="2026-03-12T01:39:10.381291909Z" level=warning msg="cleaning up after shim disconnected" id=c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e namespace=k8s.io Mar 12 01:39:10.381459 containerd[1457]: time="2026-03-12T01:39:10.381300316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:39:10.814194 systemd[1]: run-containerd-runc-k8s.io-c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e-runc.KtmMW1.mount: Deactivated successfully. Mar 12 01:39:10.814311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3b6709ec933caa6fdbfddab4e09e637686c61c9e87f770aa51c794a76b3065e-rootfs.mount: Deactivated successfully. Mar 12 01:39:11.221625 containerd[1457]: time="2026-03-12T01:39:11.221414308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 01:39:12.086143 kubelet[2511]: E0312 01:39:12.086048 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:14.087590 kubelet[2511]: E0312 01:39:14.085980 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:15.034504 containerd[1457]: time="2026-03-12T01:39:15.034379564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:15.035582 containerd[1457]: time="2026-03-12T01:39:15.035520699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 01:39:15.041059 containerd[1457]: time="2026-03-12T01:39:15.040980946Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:15.044030 containerd[1457]: time="2026-03-12T01:39:15.043980613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:15.045847 containerd[1457]: time="2026-03-12T01:39:15.045798594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.824280431s" Mar 12 01:39:15.045847 containerd[1457]: time="2026-03-12T01:39:15.045856768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 01:39:15.054246 containerd[1457]: time="2026-03-12T01:39:15.054055704Z" level=info 
msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 01:39:15.073549 containerd[1457]: time="2026-03-12T01:39:15.073457221Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916\"" Mar 12 01:39:15.074351 containerd[1457]: time="2026-03-12T01:39:15.074313178Z" level=info msg="StartContainer for \"7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916\"" Mar 12 01:39:15.135821 systemd[1]: Started cri-containerd-7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916.scope - libcontainer container 7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916. Mar 12 01:39:15.203467 containerd[1457]: time="2026-03-12T01:39:15.203382410Z" level=info msg="StartContainer for \"7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916\" returns successfully" Mar 12 01:39:15.846811 systemd[1]: cri-containerd-7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916.scope: Deactivated successfully. Mar 12 01:39:15.875227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916-rootfs.mount: Deactivated successfully. Mar 12 01:39:15.880074 containerd[1457]: time="2026-03-12T01:39:15.880032630Z" level=info msg="shim disconnected" id=7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916 namespace=k8s.io Mar 12 01:39:15.881093 containerd[1457]: time="2026-03-12T01:39:15.881030284Z" level=warning msg="cleaning up after shim disconnected" id=7ae46a49c9ecc67af399d7709332b5ee7163d41abad52cf5a70c13950c9da916 namespace=k8s.io Mar 12 01:39:15.881093 containerd[1457]: time="2026-03-12T01:39:15.881066296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:39:15.881597 kubelet[2511]: I0312 01:39:15.881554 2511 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 12 01:39:15.940029 systemd[1]: Created slice kubepods-besteffort-podce2d517a_d1f9_487b_8064_201ed4846645.slice - libcontainer container kubepods-besteffort-podce2d517a_d1f9_487b_8064_201ed4846645.slice. Mar 12 01:39:15.956503 systemd[1]: Created slice kubepods-burstable-pod0da94b48_bfde_434e_b905_92be304e2a09.slice - libcontainer container kubepods-burstable-pod0da94b48_bfde_434e_b905_92be304e2a09.slice. Mar 12 01:39:15.966920 systemd[1]: Created slice kubepods-burstable-pod6ac744d1_2bf6_4481_882f_f786a1600883.slice - libcontainer container kubepods-burstable-pod6ac744d1_2bf6_4481_882f_f786a1600883.slice. Mar 12 01:39:15.971867 systemd[1]: Created slice kubepods-besteffort-pod1d5c8986_3e41_46b9_9e79_5445416fc70d.slice - libcontainer container kubepods-besteffort-pod1d5c8986_3e41_46b9_9e79_5445416fc70d.slice. Mar 12 01:39:15.982910 systemd[1]: Created slice kubepods-besteffort-pod6337dcae_e8ff_47d2_900a_5c71524380d4.slice - libcontainer container kubepods-besteffort-pod6337dcae_e8ff_47d2_900a_5c71524380d4.slice. Mar 12 01:39:15.991289 systemd[1]: Created slice kubepods-besteffort-pod59eed31b_e704_4270_b07c_3c68fa6fc47c.slice - libcontainer container kubepods-besteffort-pod59eed31b_e704_4270_b07c_3c68fa6fc47c.slice. 
Mar 12 01:39:16.000565 systemd[1]: Created slice kubepods-besteffort-pod26e1b944_92fc_431c_8345_7c464f156745.slice - libcontainer container kubepods-besteffort-pod26e1b944_92fc_431c_8345_7c464f156745.slice. Mar 12 01:39:16.030705 kubelet[2511]: I0312 01:39:16.030584 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0da94b48-bfde-434e-b905-92be304e2a09-config-volume\") pod \"coredns-7d764666f9-tp6j8\" (UID: \"0da94b48-bfde-434e-b905-92be304e2a09\") " pod="kube-system/coredns-7d764666f9-tp6j8" Mar 12 01:39:16.030705 kubelet[2511]: I0312 01:39:16.030680 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhc6w\" (UniqueName: \"kubernetes.io/projected/0da94b48-bfde-434e-b905-92be304e2a09-kube-api-access-mhc6w\") pod \"coredns-7d764666f9-tp6j8\" (UID: \"0da94b48-bfde-434e-b905-92be304e2a09\") " pod="kube-system/coredns-7d764666f9-tp6j8" Mar 12 01:39:16.030705 kubelet[2511]: I0312 01:39:16.030701 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrs2d\" (UniqueName: \"kubernetes.io/projected/1d5c8986-3e41-46b9-9e79-5445416fc70d-kube-api-access-jrs2d\") pod \"whisker-866854587-hn6q9\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.031042 kubelet[2511]: I0312 01:39:16.030719 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-backend-key-pair\") pod \"whisker-866854587-hn6q9\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.031042 kubelet[2511]: I0312 01:39:16.030734 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6337dcae-e8ff-47d2-900a-5c71524380d4-config\") pod \"goldmane-9f7667bb8-xzc4r\" (UID: \"6337dcae-e8ff-47d2-900a-5c71524380d4\") " pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.031042 kubelet[2511]: I0312 01:39:16.030747 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6337dcae-e8ff-47d2-900a-5c71524380d4-goldmane-key-pair\") pod \"goldmane-9f7667bb8-xzc4r\" (UID: \"6337dcae-e8ff-47d2-900a-5c71524380d4\") " pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.031042 kubelet[2511]: I0312 01:39:16.030762 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62ll\" (UniqueName: \"kubernetes.io/projected/26e1b944-92fc-431c-8345-7c464f156745-kube-api-access-q62ll\") pod \"calico-apiserver-88f858578-xkxvg\" (UID: \"26e1b944-92fc-431c-8345-7c464f156745\") " pod="calico-system/calico-apiserver-88f858578-xkxvg" Mar 12 01:39:16.031042 kubelet[2511]: I0312 01:39:16.030778 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvzrh\" (UniqueName: \"kubernetes.io/projected/6337dcae-e8ff-47d2-900a-5c71524380d4-kube-api-access-cvzrh\") pod \"goldmane-9f7667bb8-xzc4r\" (UID: \"6337dcae-e8ff-47d2-900a-5c71524380d4\") " pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.031150 kubelet[2511]: I0312 01:39:16.030792 2511 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26e1b944-92fc-431c-8345-7c464f156745-calico-apiserver-certs\") pod \"calico-apiserver-88f858578-xkxvg\" (UID: \"26e1b944-92fc-431c-8345-7c464f156745\") " pod="calico-system/calico-apiserver-88f858578-xkxvg" Mar 12 01:39:16.031150 kubelet[2511]: I0312 01:39:16.030809 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ac744d1-2bf6-4481-882f-f786a1600883-config-volume\") pod \"coredns-7d764666f9-pjnrc\" (UID: \"6ac744d1-2bf6-4481-882f-f786a1600883\") " pod="kube-system/coredns-7d764666f9-pjnrc" Mar 12 01:39:16.031150 kubelet[2511]: I0312 01:39:16.030823 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eed31b-e704-4270-b07c-3c68fa6fc47c-tigera-ca-bundle\") pod \"calico-kube-controllers-fcfbfb698-xmdzd\" (UID: \"59eed31b-e704-4270-b07c-3c68fa6fc47c\") " pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" Mar 12 01:39:16.031150 kubelet[2511]: I0312 01:39:16.030837 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59k4d\" (UniqueName: \"kubernetes.io/projected/6ac744d1-2bf6-4481-882f-f786a1600883-kube-api-access-59k4d\") pod \"coredns-7d764666f9-pjnrc\" (UID: \"6ac744d1-2bf6-4481-882f-f786a1600883\") " pod="kube-system/coredns-7d764666f9-pjnrc" Mar 12 01:39:16.031150 kubelet[2511]: I0312 01:39:16.030852 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjxz\" (UniqueName: \"kubernetes.io/projected/59eed31b-e704-4270-b07c-3c68fa6fc47c-kube-api-access-5pjxz\") pod \"calico-kube-controllers-fcfbfb698-xmdzd\" (UID: \"59eed31b-e704-4270-b07c-3c68fa6fc47c\") " pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" Mar 12 01:39:16.031257 kubelet[2511]: I0312 01:39:16.030896 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce2d517a-d1f9-487b-8064-201ed4846645-calico-apiserver-certs\") pod \"calico-apiserver-88f858578-8cj8d\" (UID: \"ce2d517a-d1f9-487b-8064-201ed4846645\") " pod="calico-system/calico-apiserver-88f858578-8cj8d" Mar 12 01:39:16.031257 kubelet[2511]: I0312 01:39:16.030924 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-nginx-config\") pod \"whisker-866854587-hn6q9\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.031257 kubelet[2511]: I0312 01:39:16.030953 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l289q\" (UniqueName: \"kubernetes.io/projected/ce2d517a-d1f9-487b-8064-201ed4846645-kube-api-access-l289q\") pod \"calico-apiserver-88f858578-8cj8d\" (UID: \"ce2d517a-d1f9-487b-8064-201ed4846645\") " pod="calico-system/calico-apiserver-88f858578-8cj8d" Mar 12 01:39:16.031257 kubelet[2511]: I0312 01:39:16.030967 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-ca-bundle\") pod \"whisker-866854587-hn6q9\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.031257 kubelet[2511]: I0312 01:39:16.030980 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6337dcae-e8ff-47d2-900a-5c71524380d4-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-xzc4r\" (UID: \"6337dcae-e8ff-47d2-900a-5c71524380d4\") " pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.091293 systemd[1]: Created slice kubepods-besteffort-pod4350a8ed_9db0_4145_8365_af9918373d13.slice - libcontainer container kubepods-besteffort-pod4350a8ed_9db0_4145_8365_af9918373d13.slice. Mar 12 01:39:16.099083 containerd[1457]: time="2026-03-12T01:39:16.098636635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r96sx,Uid:4350a8ed-9db0-4145-8365-af9918373d13,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.237613 containerd[1457]: time="2026-03-12T01:39:16.237472959Z" level=error msg="Failed to destroy network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.238554 containerd[1457]: time="2026-03-12T01:39:16.237975808Z" level=error msg="encountered an error cleaning up failed sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.238554 containerd[1457]: time="2026-03-12T01:39:16.238024895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r96sx,Uid:4350a8ed-9db0-4145-8365-af9918373d13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.247913 kubelet[2511]: E0312 01:39:16.247803 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.247913 kubelet[2511]: E0312 01:39:16.247905 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:16.248159 kubelet[2511]: E0312 01:39:16.247923 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r96sx" Mar 12 01:39:16.248159 kubelet[2511]: E0312 01:39:16.247986 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r96sx_calico-system(4350a8ed-9db0-4145-8365-af9918373d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r96sx_calico-system(4350a8ed-9db0-4145-8365-af9918373d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r96sx" podUID="4350a8ed-9db0-4145-8365-af9918373d13" Mar 12 01:39:16.252633 containerd[1457]: time="2026-03-12T01:39:16.252114637Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 01:39:16.257372 containerd[1457]: time="2026-03-12T01:39:16.257306246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-8cj8d,Uid:ce2d517a-d1f9-487b-8064-201ed4846645,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.264783 kubelet[2511]: E0312 01:39:16.264696 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:16.265285 containerd[1457]: time="2026-03-12T01:39:16.265232183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tp6j8,Uid:0da94b48-bfde-434e-b905-92be304e2a09,Namespace:kube-system,Attempt:0,}" Mar 12 01:39:16.276022 kubelet[2511]: E0312 01:39:16.275993 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:16.276784 containerd[1457]: time="2026-03-12T01:39:16.276739162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pjnrc,Uid:6ac744d1-2bf6-4481-882f-f786a1600883,Namespace:kube-system,Attempt:0,}" Mar 12 01:39:16.279974 containerd[1457]: time="2026-03-12T01:39:16.279932080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866854587-hn6q9,Uid:1d5c8986-3e41-46b9-9e79-5445416fc70d,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.288313 containerd[1457]: time="2026-03-12T01:39:16.288238310Z" level=info msg="CreateContainer within sandbox \"fd1eb73d0bec7b9da205b3c02b3c1c7a2a1bd842b3fc942f13af93617d9d971a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9b8c13b5ec9cc7dfd8748b7eef2df937dba6763969b664137ed3b45273251769\"" Mar 12 01:39:16.289215 containerd[1457]: time="2026-03-12T01:39:16.289169972Z" level=info msg="StartContainer for \"9b8c13b5ec9cc7dfd8748b7eef2df937dba6763969b664137ed3b45273251769\"" Mar 12 01:39:16.292814 containerd[1457]: time="2026-03-12T01:39:16.292711463Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-xzc4r,Uid:6337dcae-e8ff-47d2-900a-5c71524380d4,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.299809 containerd[1457]: time="2026-03-12T01:39:16.299765523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fcfbfb698-xmdzd,Uid:59eed31b-e704-4270-b07c-3c68fa6fc47c,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.316069 containerd[1457]: time="2026-03-12T01:39:16.316012861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-xkxvg,Uid:26e1b944-92fc-431c-8345-7c464f156745,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:16.358881 systemd[1]: Started cri-containerd-9b8c13b5ec9cc7dfd8748b7eef2df937dba6763969b664137ed3b45273251769.scope - libcontainer container 9b8c13b5ec9cc7dfd8748b7eef2df937dba6763969b664137ed3b45273251769. Mar 12 01:39:16.399878 containerd[1457]: time="2026-03-12T01:39:16.399634798Z" level=error msg="Failed to destroy network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.406270 containerd[1457]: time="2026-03-12T01:39:16.406239951Z" level=error msg="encountered an error cleaning up failed sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.406510 containerd[1457]: time="2026-03-12T01:39:16.406424205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-8cj8d,Uid:ce2d517a-d1f9-487b-8064-201ed4846645,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.408722 kubelet[2511]: E0312 01:39:16.407032 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.408722 kubelet[2511]: E0312 01:39:16.407090 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88f858578-8cj8d" Mar 12 01:39:16.408722 kubelet[2511]: E0312 01:39:16.407121 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88f858578-8cj8d" Mar 12 01:39:16.408884 kubelet[2511]: E0312 01:39:16.407174 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88f858578-8cj8d_calico-system(ce2d517a-d1f9-487b-8064-201ed4846645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88f858578-8cj8d_calico-system(ce2d517a-d1f9-487b-8064-201ed4846645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88f858578-8cj8d" podUID="ce2d517a-d1f9-487b-8064-201ed4846645" Mar 12 01:39:16.449756 containerd[1457]: time="2026-03-12T01:39:16.449182920Z" level=info msg="StartContainer for \"9b8c13b5ec9cc7dfd8748b7eef2df937dba6763969b664137ed3b45273251769\" returns successfully" Mar 12 01:39:16.511768 containerd[1457]: time="2026-03-12T01:39:16.511627515Z" level=error msg="Failed to destroy network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.512453 containerd[1457]: time="2026-03-12T01:39:16.512409722Z" level=error msg="encountered an error cleaning up failed sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.512553 containerd[1457]: time="2026-03-12T01:39:16.512462046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866854587-hn6q9,Uid:1d5c8986-3e41-46b9-9e79-5445416fc70d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.512994 kubelet[2511]: E0312 01:39:16.512930 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.513086 kubelet[2511]: E0312 01:39:16.513021 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.513086 kubelet[2511]: E0312 01:39:16.513046 2511 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-866854587-hn6q9" Mar 12 01:39:16.513136 kubelet[2511]: E0312 01:39:16.513095 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-866854587-hn6q9_calico-system(1d5c8986-3e41-46b9-9e79-5445416fc70d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-866854587-hn6q9_calico-system(1d5c8986-3e41-46b9-9e79-5445416fc70d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-866854587-hn6q9" podUID="1d5c8986-3e41-46b9-9e79-5445416fc70d" Mar 12 01:39:16.549762 containerd[1457]: time="2026-03-12T01:39:16.548599318Z" level=error msg="Failed to destroy network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.549762 containerd[1457]: time="2026-03-12T01:39:16.548952781Z" level=error msg="Failed to destroy network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.549762 containerd[1457]: time="2026-03-12T01:39:16.549295097Z" level=error msg="encountered an error cleaning up failed sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.549762 containerd[1457]: time="2026-03-12T01:39:16.549344824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xzc4r,Uid:6337dcae-e8ff-47d2-900a-5c71524380d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.552871 containerd[1457]: time="2026-03-12T01:39:16.549845114Z" level=error msg="encountered an error cleaning up failed sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.552871 containerd[1457]: time="2026-03-12T01:39:16.549912357Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-88f858578-xkxvg,Uid:26e1b944-92fc-431c-8345-7c464f156745,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.553027 kubelet[2511]: E0312 01:39:16.549799 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.553027 kubelet[2511]: E0312 01:39:16.549855 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.553027 kubelet[2511]: E0312 01:39:16.549872 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xzc4r" Mar 12 01:39:16.553027 kubelet[2511]: E0312 01:39:16.550858 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.555713 kubelet[2511]: E0312 01:39:16.550901 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88f858578-xkxvg" Mar 12 01:39:16.555713 kubelet[2511]: E0312 01:39:16.550923 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88f858578-xkxvg" Mar 12 01:39:16.555713 kubelet[2511]: E0312 01:39:16.550970 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-88f858578-xkxvg_calico-system(26e1b944-92fc-431c-8345-7c464f156745)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88f858578-xkxvg_calico-system(26e1b944-92fc-431c-8345-7c464f156745)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88f858578-xkxvg" podUID="26e1b944-92fc-431c-8345-7c464f156745" Mar 12 01:39:16.556028 kubelet[2511]: E0312 01:39:16.552252 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-xzc4r_calico-system(6337dcae-e8ff-47d2-900a-5c71524380d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-xzc4r_calico-system(6337dcae-e8ff-47d2-900a-5c71524380d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-xzc4r" podUID="6337dcae-e8ff-47d2-900a-5c71524380d4" Mar 12 01:39:16.561096 containerd[1457]: time="2026-03-12T01:39:16.561045550Z" level=error msg="Failed to destroy network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.562265 containerd[1457]: time="2026-03-12T01:39:16.561865282Z" level=error msg="encountered an error cleaning up failed sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.562265 containerd[1457]: time="2026-03-12T01:39:16.561936361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pjnrc,Uid:6ac744d1-2bf6-4481-882f-f786a1600883,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.563083 kubelet[2511]: E0312 01:39:16.562323 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.563083 kubelet[2511]: E0312 01:39:16.562406 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pjnrc" Mar 12 01:39:16.563083 kubelet[2511]: E0312 01:39:16.562422 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pjnrc" Mar 12 01:39:16.563354 kubelet[2511]: E0312 01:39:16.562530 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-pjnrc_kube-system(6ac744d1-2bf6-4481-882f-f786a1600883)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pjnrc_kube-system(6ac744d1-2bf6-4481-882f-f786a1600883)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pjnrc" podUID="6ac744d1-2bf6-4481-882f-f786a1600883" Mar 12 01:39:16.568754 containerd[1457]: time="2026-03-12T01:39:16.568610227Z" level=error msg="Failed to destroy network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.569423 containerd[1457]: time="2026-03-12T01:39:16.569267148Z" level=error msg="encountered an error cleaning up failed sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.569423 containerd[1457]: time="2026-03-12T01:39:16.569341695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fcfbfb698-xmdzd,Uid:59eed31b-e704-4270-b07c-3c68fa6fc47c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.570095 kubelet[2511]: E0312 01:39:16.570001 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.570095 kubelet[2511]: E0312 01:39:16.570080 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" Mar 12 01:39:16.570186 kubelet[2511]: E0312 01:39:16.570098 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" Mar 12 01:39:16.570186 kubelet[2511]: E0312 01:39:16.570156 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fcfbfb698-xmdzd_calico-system(59eed31b-e704-4270-b07c-3c68fa6fc47c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fcfbfb698-xmdzd_calico-system(59eed31b-e704-4270-b07c-3c68fa6fc47c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" podUID="59eed31b-e704-4270-b07c-3c68fa6fc47c" Mar 12 01:39:16.575737 containerd[1457]: time="2026-03-12T01:39:16.575577985Z" level=error msg="Failed to destroy network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.576350 containerd[1457]: time="2026-03-12T01:39:16.576190959Z" level=error msg="encountered an error cleaning up failed sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.576775 containerd[1457]: time="2026-03-12T01:39:16.576743746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tp6j8,Uid:0da94b48-bfde-434e-b905-92be304e2a09,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:39:16.577299 kubelet[2511]: E0312 01:39:16.577247 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 
01:39:16.577429 kubelet[2511]: E0312 01:39:16.577328 2511 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-tp6j8" Mar 12 01:39:16.577429 kubelet[2511]: E0312 01:39:16.577354 2511 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-tp6j8" Mar 12 01:39:16.577513 kubelet[2511]: E0312 01:39:16.577456 2511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-tp6j8_kube-system(0da94b48-bfde-434e-b905-92be304e2a09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-tp6j8_kube-system(0da94b48-bfde-434e-b905-92be304e2a09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-tp6j8" podUID="0da94b48-bfde-434e-b905-92be304e2a09" Mar 12 01:39:17.122936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2-shm.mount: Deactivated successfully. 
Mar 12 01:39:17.238759 kubelet[2511]: I0312 01:39:17.238636 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:17.240257 kubelet[2511]: I0312 01:39:17.240197 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:17.243152 kubelet[2511]: I0312 01:39:17.243048 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:17.249581 containerd[1457]: time="2026-03-12T01:39:17.249312452Z" level=info msg="StopPodSandbox for \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\"" Mar 12 01:39:17.250125 containerd[1457]: time="2026-03-12T01:39:17.250004210Z" level=info msg="StopPodSandbox for \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\"" Mar 12 01:39:17.250978 containerd[1457]: time="2026-03-12T01:39:17.250958262Z" level=info msg="Ensure that sandbox dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2 in task-service has been cleanup successfully" Mar 12 01:39:17.251505 containerd[1457]: time="2026-03-12T01:39:17.251283741Z" level=info msg="Ensure that sandbox 16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3 in task-service has been cleanup successfully" Mar 12 01:39:17.252004 containerd[1457]: time="2026-03-12T01:39:17.251927610Z" level=info msg="StopPodSandbox for \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\"" Mar 12 01:39:17.252399 containerd[1457]: time="2026-03-12T01:39:17.252366392Z" level=info msg="Ensure that sandbox e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d in task-service has been cleanup successfully" Mar 12 01:39:17.254343 kubelet[2511]: I0312 01:39:17.254307 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:17.255935 containerd[1457]: time="2026-03-12T01:39:17.255864012Z" level=info msg="StopPodSandbox for \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\"" Mar 12 01:39:17.256217 containerd[1457]: time="2026-03-12T01:39:17.256186645Z" level=info msg="Ensure that sandbox b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4 in task-service has been cleanup successfully" Mar 12 01:39:17.260770 kubelet[2511]: I0312 01:39:17.260622 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:17.262949 containerd[1457]: time="2026-03-12T01:39:17.262873938Z" level=info msg="StopPodSandbox for \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\"" Mar 12 01:39:17.263176 containerd[1457]: time="2026-03-12T01:39:17.263107246Z" level=info msg="Ensure that sandbox 67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2 in task-service has been cleanup successfully" Mar 12 01:39:17.267296 kubelet[2511]: I0312 01:39:17.267235 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:17.267843 containerd[1457]: time="2026-03-12T01:39:17.267623110Z" level=info msg="StopPodSandbox for \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\"" Mar 12 01:39:17.268555 
containerd[1457]: time="2026-03-12T01:39:17.268370897Z" level=info msg="Ensure that sandbox e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720 in task-service has been cleanup successfully" Mar 12 01:39:17.273581 kubelet[2511]: I0312 01:39:17.272764 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:17.273899 containerd[1457]: time="2026-03-12T01:39:17.273838047Z" level=info msg="StopPodSandbox for \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\"" Mar 12 01:39:17.274080 containerd[1457]: time="2026-03-12T01:39:17.273993192Z" level=info msg="Ensure that sandbox adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625 in task-service has been cleanup successfully" Mar 12 01:39:17.277799 kubelet[2511]: I0312 01:39:17.277563 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:17.279361 kubelet[2511]: I0312 01:39:17.279092 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-h84tt" podStartSLOduration=2.341004864 podStartE2EDuration="16.279084082s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:02.29890231 +0000 UTC m=+17.452297010" lastFinishedPulling="2026-03-12 01:39:16.236981539 +0000 UTC m=+31.390376228" observedRunningTime="2026-03-12 01:39:17.277315572 +0000 UTC m=+32.430710262" watchObservedRunningTime="2026-03-12 01:39:17.279084082 +0000 UTC m=+32.432478772" Mar 12 01:39:17.281855 containerd[1457]: time="2026-03-12T01:39:17.281771196Z" level=info msg="StopPodSandbox for \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\"" Mar 12 01:39:17.283323 containerd[1457]: time="2026-03-12T01:39:17.283301869Z" level=info msg="Ensure that sandbox e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f in task-service has been cleanup successfully" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.437 [INFO][3714] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.437 [INFO][3714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" iface="eth0" netns="/var/run/netns/cni-f2d28065-4000-6812-fb8f-5cef6da1279c" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.438 [INFO][3714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" iface="eth0" netns="/var/run/netns/cni-f2d28065-4000-6812-fb8f-5cef6da1279c" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.438 [INFO][3714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" iface="eth0" netns="/var/run/netns/cni-f2d28065-4000-6812-fb8f-5cef6da1279c" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.438 [INFO][3714] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.439 [INFO][3714] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.534 [INFO][3847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.534 [INFO][3847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.534 [INFO][3847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.548 [WARNING][3847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.548 [INFO][3847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.551 [INFO][3847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.568933 containerd[1457]: 2026-03-12 01:39:17.562 [INFO][3714] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:17.571505 systemd[1]: run-netns-cni\x2df2d28065\x2d4000\x2d6812\x2dfb8f\x2d5cef6da1279c.mount: Deactivated successfully. Mar 12 01:39:17.571946 containerd[1457]: time="2026-03-12T01:39:17.571877956Z" level=info msg="TearDown network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" successfully" Mar 12 01:39:17.571946 containerd[1457]: time="2026-03-12T01:39:17.571908156Z" level=info msg="StopPodSandbox for \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" returns successfully" Mar 12 01:39:17.579041 containerd[1457]: time="2026-03-12T01:39:17.578965749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xzc4r,Uid:6337dcae-e8ff-47d2-900a-5c71524380d4,Namespace:calico-system,Attempt:1,}" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.433 [INFO][3716] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.434 [INFO][3716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" iface="eth0" netns="/var/run/netns/cni-9a2e10ad-2af9-1301-91e9-f79edf9c0a9d" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.434 [INFO][3716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" iface="eth0" netns="/var/run/netns/cni-9a2e10ad-2af9-1301-91e9-f79edf9c0a9d" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.434 [INFO][3716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" iface="eth0" netns="/var/run/netns/cni-9a2e10ad-2af9-1301-91e9-f79edf9c0a9d" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.434 [INFO][3716] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.434 [INFO][3716] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.535 [INFO][3845] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.535 [INFO][3845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.553 [INFO][3845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.567 [WARNING][3845] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.567 [INFO][3845] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.574 [INFO][3845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.582222 containerd[1457]: 2026-03-12 01:39:17.579 [INFO][3716] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:17.585242 systemd[1]: run-netns-cni\x2d9a2e10ad\x2d2af9\x2d1301\x2d91e9\x2df79edf9c0a9d.mount: Deactivated successfully. 
Mar 12 01:39:17.587180 containerd[1457]: time="2026-03-12T01:39:17.587130360Z" level=info msg="TearDown network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" successfully" Mar 12 01:39:17.587264 containerd[1457]: time="2026-03-12T01:39:17.587186180Z" level=info msg="StopPodSandbox for \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" returns successfully" Mar 12 01:39:17.595805 containerd[1457]: time="2026-03-12T01:39:17.595752008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-xkxvg,Uid:26e1b944-92fc-431c-8345-7c464f156745,Namespace:calico-system,Attempt:1,}" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.430 [INFO][3799] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.430 [INFO][3799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" iface="eth0" netns="/var/run/netns/cni-7ecf6d4a-f3aa-7813-f759-8958d4107ec2" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.430 [INFO][3799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" iface="eth0" netns="/var/run/netns/cni-7ecf6d4a-f3aa-7813-f759-8958d4107ec2" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.430 [INFO][3799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" iface="eth0" netns="/var/run/netns/cni-7ecf6d4a-f3aa-7813-f759-8958d4107ec2" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.430 [INFO][3799] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.431 [INFO][3799] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.557 [INFO][3840] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.558 [INFO][3840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.573 [INFO][3840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.582 [WARNING][3840] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.585 [INFO][3840] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.590 [INFO][3840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.608223 containerd[1457]: 2026-03-12 01:39:17.600 [INFO][3799] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:17.609548 containerd[1457]: time="2026-03-12T01:39:17.609359145Z" level=info msg="TearDown network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" successfully" Mar 12 01:39:17.609548 containerd[1457]: time="2026-03-12T01:39:17.609395315Z" level=info msg="StopPodSandbox for \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" returns successfully" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.475 [INFO][3777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.475 [INFO][3777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" iface="eth0" netns="/var/run/netns/cni-ea898124-db1c-bc86-f069-fdfe9af32a69" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" iface="eth0" netns="/var/run/netns/cni-ea898124-db1c-bc86-f069-fdfe9af32a69" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" iface="eth0" netns="/var/run/netns/cni-ea898124-db1c-bc86-f069-fdfe9af32a69" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.559 [INFO][3875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.560 [INFO][3875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.589 [INFO][3875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.599 [WARNING][3875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.599 [INFO][3875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.603 [INFO][3875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.613549 containerd[1457]: 2026-03-12 01:39:17.608 [INFO][3777] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:17.615085 containerd[1457]: time="2026-03-12T01:39:17.614936082Z" level=info msg="TearDown network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" successfully" Mar 12 01:39:17.615085 containerd[1457]: time="2026-03-12T01:39:17.614985459Z" level=info msg="StopPodSandbox for \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" returns successfully" Mar 12 01:39:17.615527 kubelet[2511]: E0312 01:39:17.615452 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:17.616317 containerd[1457]: time="2026-03-12T01:39:17.616284488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tp6j8,Uid:0da94b48-bfde-434e-b905-92be304e2a09,Namespace:kube-system,Attempt:1,}" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.446 [INFO][3771] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.446 [INFO][3771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" iface="eth0" netns="/var/run/netns/cni-c95e2923-a1a0-40d7-ec53-624eec34b7f1" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.447 [INFO][3771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" iface="eth0" netns="/var/run/netns/cni-c95e2923-a1a0-40d7-ec53-624eec34b7f1" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.448 [INFO][3771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" iface="eth0" netns="/var/run/netns/cni-c95e2923-a1a0-40d7-ec53-624eec34b7f1" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.448 [INFO][3771] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.448 [INFO][3771] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.562 [INFO][3859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.562 [INFO][3859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.603 [INFO][3859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.611 [WARNING][3859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.612 [INFO][3859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.617 [INFO][3859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.636805 containerd[1457]: 2026-03-12 01:39:17.627 [INFO][3771] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:17.637889 containerd[1457]: time="2026-03-12T01:39:17.637583501Z" level=info msg="TearDown network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" successfully" Mar 12 01:39:17.637889 containerd[1457]: time="2026-03-12T01:39:17.637608060Z" level=info msg="StopPodSandbox for \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" returns successfully" Mar 12 01:39:17.643213 containerd[1457]: time="2026-03-12T01:39:17.643000661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fcfbfb698-xmdzd,Uid:59eed31b-e704-4270-b07c-3c68fa6fc47c,Namespace:calico-system,Attempt:1,}" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.463 [INFO][3776] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.463 [INFO][3776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" iface="eth0" netns="/var/run/netns/cni-4b2d1b94-a7b0-f0cc-edf7-16deee65ed8f" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.464 [INFO][3776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" iface="eth0" netns="/var/run/netns/cni-4b2d1b94-a7b0-f0cc-edf7-16deee65ed8f" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.464 [INFO][3776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" iface="eth0" netns="/var/run/netns/cni-4b2d1b94-a7b0-f0cc-edf7-16deee65ed8f" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.464 [INFO][3776] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.464 [INFO][3776] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.563 [INFO][3869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.563 [INFO][3869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.616 [INFO][3869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.628 [WARNING][3869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.628 [INFO][3869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.631 [INFO][3869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.654422 containerd[1457]: 2026-03-12 01:39:17.644 [INFO][3776] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:17.659853 containerd[1457]: time="2026-03-12T01:39:17.659818718Z" level=info msg="TearDown network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" successfully" Mar 12 01:39:17.660104 containerd[1457]: time="2026-03-12T01:39:17.660001136Z" level=info msg="StopPodSandbox for \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" returns successfully" Mar 12 01:39:17.669336 containerd[1457]: time="2026-03-12T01:39:17.669289597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r96sx,Uid:4350a8ed-9db0-4145-8365-af9918373d13,Namespace:calico-system,Attempt:1,}" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.456 [INFO][3715] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.456 [INFO][3715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" iface="eth0" netns="/var/run/netns/cni-7f0d5588-12b5-0e6f-2daa-48a96fb911eb" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.456 [INFO][3715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" iface="eth0" netns="/var/run/netns/cni-7f0d5588-12b5-0e6f-2daa-48a96fb911eb" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.460 [INFO][3715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" iface="eth0" netns="/var/run/netns/cni-7f0d5588-12b5-0e6f-2daa-48a96fb911eb" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.460 [INFO][3715] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.460 [INFO][3715] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.566 [INFO][3868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.567 [INFO][3868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.630 [INFO][3868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.644 [WARNING][3868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.644 [INFO][3868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.647 [INFO][3868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.669899 containerd[1457]: 2026-03-12 01:39:17.657 [INFO][3715] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:17.670413 containerd[1457]: time="2026-03-12T01:39:17.670393443Z" level=info msg="TearDown network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" successfully" Mar 12 01:39:17.670594 containerd[1457]: time="2026-03-12T01:39:17.670462639Z" level=info msg="StopPodSandbox for \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" returns successfully" Mar 12 01:39:17.673905 kubelet[2511]: E0312 01:39:17.673874 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:17.678058 containerd[1457]: time="2026-03-12T01:39:17.677708717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pjnrc,Uid:6ac744d1-2bf6-4481-882f-f786a1600883,Namespace:kube-system,Attempt:1,}" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.474 [INFO][3801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" iface="eth0" netns="/var/run/netns/cni-7f6743c7-b113-6e56-fab7-bec3618287a3" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" iface="eth0" netns="/var/run/netns/cni-7f6743c7-b113-6e56-fab7-bec3618287a3" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" iface="eth0" netns="/var/run/netns/cni-7f6743c7-b113-6e56-fab7-bec3618287a3" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.476 [INFO][3801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.568 [INFO][3881] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.568 [INFO][3881] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.647 [INFO][3881] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.665 [WARNING][3881] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.665 [INFO][3881] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.668 [INFO][3881] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.687701 containerd[1457]: 2026-03-12 01:39:17.679 [INFO][3801] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:17.689205 containerd[1457]: time="2026-03-12T01:39:17.688568357Z" level=info msg="TearDown network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" successfully" Mar 12 01:39:17.689205 containerd[1457]: time="2026-03-12T01:39:17.688593065Z" level=info msg="StopPodSandbox for \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" returns successfully" Mar 12 01:39:17.693938 containerd[1457]: time="2026-03-12T01:39:17.693917544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-8cj8d,Uid:ce2d517a-d1f9-487b-8064-201ed4846645,Namespace:calico-system,Attempt:1,}" Mar 12 01:39:17.752742 kubelet[2511]: I0312 01:39:17.752709 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-backend-key-pair\") pod \"1d5c8986-3e41-46b9-9e79-5445416fc70d\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " Mar 12 01:39:17.753838 kubelet[2511]: I0312 01:39:17.752938 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-ca-bundle\") pod \"1d5c8986-3e41-46b9-9e79-5445416fc70d\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " Mar 12 01:39:17.753838 kubelet[2511]: I0312 01:39:17.753014 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1d5c8986-3e41-46b9-9e79-5445416fc70d-kube-api-access-jrs2d\" (UniqueName: \"kubernetes.io/projected/1d5c8986-3e41-46b9-9e79-5445416fc70d-kube-api-access-jrs2d\") pod \"1d5c8986-3e41-46b9-9e79-5445416fc70d\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " Mar 12 01:39:17.753838 kubelet[2511]: I0312 01:39:17.753043 2511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-nginx-config\" (UniqueName: \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-nginx-config\") pod \"1d5c8986-3e41-46b9-9e79-5445416fc70d\" (UID: \"1d5c8986-3e41-46b9-9e79-5445416fc70d\") " Mar 12 01:39:17.753838 kubelet[2511]: I0312 01:39:17.753436 2511 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-nginx-config" pod "1d5c8986-3e41-46b9-9e79-5445416fc70d" (UID: "1d5c8986-3e41-46b9-9e79-5445416fc70d"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:39:17.755941 kubelet[2511]: I0312 01:39:17.755871 2511 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-ca-bundle" pod "1d5c8986-3e41-46b9-9e79-5445416fc70d" (UID: "1d5c8986-3e41-46b9-9e79-5445416fc70d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:39:17.762561 kubelet[2511]: I0312 01:39:17.762463 2511 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-backend-key-pair" pod "1d5c8986-3e41-46b9-9e79-5445416fc70d" (UID: "1d5c8986-3e41-46b9-9e79-5445416fc70d"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:39:17.763892 kubelet[2511]: I0312 01:39:17.763832 2511 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5c8986-3e41-46b9-9e79-5445416fc70d-kube-api-access-jrs2d" pod "1d5c8986-3e41-46b9-9e79-5445416fc70d" (UID: "1d5c8986-3e41-46b9-9e79-5445416fc70d"). InnerVolumeSpecName "kube-api-access-jrs2d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:39:17.849523 systemd-networkd[1381]: califb758cf90b7: Link UP Mar 12 01:39:17.853873 kubelet[2511]: I0312 01:39:17.853762 2511 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:39:17.853873 kubelet[2511]: I0312 01:39:17.853810 2511 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:39:17.853873 kubelet[2511]: I0312 01:39:17.853819 2511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrs2d\" (UniqueName: \"kubernetes.io/projected/1d5c8986-3e41-46b9-9e79-5445416fc70d-kube-api-access-jrs2d\") on node \"localhost\" DevicePath \"\"" Mar 12 01:39:17.853873 kubelet[2511]: I0312 01:39:17.853828 2511 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1d5c8986-3e41-46b9-9e79-5445416fc70d-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:39:17.860845 systemd-networkd[1381]: califb758cf90b7: Gained carrier Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.676 [ERROR][3921] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.703 [INFO][3921] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0 calico-apiserver-88f858578- calico-system 26e1b944-92fc-431c-8345-7c464f156745 951 0 2026-03-12 01:39:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88f858578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-88f858578-xkxvg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califb758cf90b7 [] [] }} ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.703 [INFO][3921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.766 [INFO][3968] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" HandleID="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.778 [INFO][3968] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" HandleID="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-88f858578-xkxvg", "timestamp":"2026-03-12 01:39:17.766112639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000620c60)} Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.778 [INFO][3968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.778 [INFO][3968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.778 [INFO][3968] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.783 [INFO][3968] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.792 [INFO][3968] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.799 [INFO][3968] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.802 [INFO][3968] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.805 [INFO][3968] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.805 [INFO][3968] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.807 [INFO][3968] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2 Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.816 [INFO][3968] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.824 [INFO][3968] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.824 [INFO][3968] ipam/ipam.go 895: Auto-assigned 1 out of 
1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" host="localhost" Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.824 [INFO][3968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:17.897985 containerd[1457]: 2026-03-12 01:39:17.824 [INFO][3968] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" HandleID="k8s-pod-network.6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.833 [INFO][3921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"26e1b944-92fc-431c-8345-7c464f156745", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-88f858578-xkxvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb758cf90b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.833 [INFO][3921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.834 [INFO][3921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb758cf90b7 ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.867 [INFO][3921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.868 [INFO][3921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"26e1b944-92fc-431c-8345-7c464f156745", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2", Pod:"calico-apiserver-88f858578-xkxvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb758cf90b7", MAC:"4e:2f:9f:8c:8f:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:17.899093 containerd[1457]: 2026-03-12 01:39:17.888 [INFO][3921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2" Namespace="calico-system" Pod="calico-apiserver-88f858578-xkxvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:17.930692 containerd[1457]: time="2026-03-12T01:39:17.930193151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:17.930692 containerd[1457]: time="2026-03-12T01:39:17.930342305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:17.930692 containerd[1457]: time="2026-03-12T01:39:17.930398925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:17.930692 containerd[1457]: time="2026-03-12T01:39:17.930526535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:17.957406 systemd-networkd[1381]: califca6a1ae899: Link UP Mar 12 01:39:17.958865 systemd-networkd[1381]: califca6a1ae899: Gained carrier Mar 12 01:39:17.959820 systemd[1]: Started cri-containerd-6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2.scope - libcontainer container 6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2. Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.643 [ERROR][3907] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.665 [INFO][3907] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0 goldmane-9f7667bb8- calico-system 6337dcae-e8ff-47d2-900a-5c71524380d4 952 0 2026-03-12 01:39:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-xzc4r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califca6a1ae899 [] [] }} ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.665 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.768 [INFO][3937] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" HandleID="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.788 [INFO][3937] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" HandleID="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019fba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-xzc4r", "timestamp":"2026-03-12 01:39:17.768129153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e66e0)} Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.788 [INFO][3937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.826 [INFO][3937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.826 [INFO][3937] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.882 [INFO][3937] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.898 [INFO][3937] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.912 [INFO][3937] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.920 [INFO][3937] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.925 [INFO][3937] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.925 [INFO][3937] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.927 [INFO][3937] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.933 [INFO][3937] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.943 [INFO][3937] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.944 [INFO][3937] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" host="localhost" Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.946 [INFO][3937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
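[Editor's aside] Both allocations above walk the same path: confirm the host's affinity to block 192.168.88.128/26, then hand out the next free address under the host-wide lock — .129 went to the apiserver pod, .130 to goldmane. A small stdlib-only Go sketch of next-free-in-block assignment under those assumptions (illustrative only):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unallocated address in the block, skipping
// the network address itself. With block 192.168.88.128/26 this yields
// .129, then .130, and so on, matching the log's assignments.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	for i := 0; i < 3; i++ {
		a, ok := nextFree(block, allocated)
		fmt.Println(a, ok) // 192.168.88.129, .130, .131
	}
}
```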
Mar 12 01:39:17.979431 containerd[1457]: 2026-03-12 01:39:17.946 [INFO][3937] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" HandleID="k8s-pod-network.42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.951 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6337dcae-e8ff-47d2-900a-5c71524380d4", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-xzc4r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califca6a1ae899", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.951 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.951 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califca6a1ae899 ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.959 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.959 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6337dcae-e8ff-47d2-900a-5c71524380d4", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d", Pod:"goldmane-9f7667bb8-xzc4r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califca6a1ae899", MAC:"fa:6b:80:bd:d2:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:17.980339 containerd[1457]: 2026-03-12 01:39:17.974 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d" Namespace="calico-system" Pod="goldmane-9f7667bb8-xzc4r" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:17.990584 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.044124 containerd[1457]: time="2026-03-12T01:39:18.043726438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.044124 containerd[1457]: time="2026-03-12T01:39:18.043909377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.044124 containerd[1457]: time="2026-03-12T01:39:18.043924205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.044124 containerd[1457]: time="2026-03-12T01:39:18.044020023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.081541 systemd-networkd[1381]: calid5d58159905: Link UP Mar 12 01:39:18.089117 systemd-networkd[1381]: calid5d58159905: Gained carrier Mar 12 01:39:18.095926 systemd[1]: Started cri-containerd-42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d.scope - libcontainer container 42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d. 
Mar 12 01:39:18.110539 containerd[1457]: time="2026-03-12T01:39:18.109854196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-xkxvg,Uid:26e1b944-92fc-431c-8345-7c464f156745,Namespace:calico-system,Attempt:1,} returns sandbox id \"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2\"" Mar 12 01:39:18.116014 containerd[1457]: time="2026-03-12T01:39:18.115984936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:39:18.133099 systemd[1]: run-netns-cni\x2dc95e2923\x2da1a0\x2d40d7\x2dec53\x2d624eec34b7f1.mount: Deactivated successfully. Mar 12 01:39:18.133211 systemd[1]: run-netns-cni\x2d7f0d5588\x2d12b5\x2d0e6f\x2d2daa\x2d48a96fb911eb.mount: Deactivated successfully. Mar 12 01:39:18.133281 systemd[1]: run-netns-cni\x2d7ecf6d4a\x2df3aa\x2d7813\x2df759\x2d8958d4107ec2.mount: Deactivated successfully. Mar 12 01:39:18.133349 systemd[1]: run-netns-cni\x2dea898124\x2ddb1c\x2dbc86\x2df069\x2dfdfe9af32a69.mount: Deactivated successfully. Mar 12 01:39:18.133423 systemd[1]: run-netns-cni\x2d7f6743c7\x2db113\x2d6e56\x2dfab7\x2dbec3618287a3.mount: Deactivated successfully. Mar 12 01:39:18.133586 systemd[1]: var-lib-kubelet-pods-1d5c8986\x2d3e41\x2d46b9\x2d9e79\x2d5445416fc70d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrs2d.mount: Deactivated successfully. Mar 12 01:39:18.133716 systemd[1]: var-lib-kubelet-pods-1d5c8986\x2d3e41\x2d46b9\x2d9e79\x2d5445416fc70d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 01:39:18.133789 systemd[1]: run-netns-cni\x2d4b2d1b94\x2da7b0\x2df0cc\x2dedf7\x2d16deee65ed8f.mount: Deactivated successfully. Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.771 [ERROR][3973] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.786 [INFO][3973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--r96sx-eth0 csi-node-driver- calico-system 4350a8ed-9db0-4145-8365-af9918373d13 955 0 2026-03-12 01:39:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-r96sx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid5d58159905 [] [] }} ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.787 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.847 [INFO][4025] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" 
HandleID="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.859 [INFO][4025] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" HandleID="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-r96sx", "timestamp":"2026-03-12 01:39:17.847992258 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00048db80)} Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.859 [INFO][4025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.944 [INFO][4025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.944 [INFO][4025] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:17.984 [INFO][4025] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.005 [INFO][4025] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.023 [INFO][4025] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.029 [INFO][4025] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.034 [INFO][4025] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.035 [INFO][4025] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.039 [INFO][4025] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734 Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.046 [INFO][4025] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.056 [INFO][4025] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.057 [INFO][4025] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" host="localhost" Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.057 [INFO][4025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:18.147926 containerd[1457]: 2026-03-12 01:39:18.057 [INFO][4025] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" HandleID="k8s-pod-network.fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.070 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r96sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4350a8ed-9db0-4145-8365-af9918373d13", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-r96sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d58159905", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.071 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.071 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5d58159905 ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.102 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.104 [INFO][3973]
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r96sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4350a8ed-9db0-4145-8365-af9918373d13", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734", Pod:"csi-node-driver-r96sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d58159905", MAC:"36:e9:19:26:6b:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.148586 containerd[1457]: 2026-03-12 01:39:18.136 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734" Namespace="calico-system" Pod="csi-node-driver-r96sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:18.160136 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.185735 systemd-networkd[1381]: cali0bdf599797e: Link UP Mar 12 01:39:18.188504 systemd-networkd[1381]: cali0bdf599797e: Gained carrier Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.759 [ERROR][3940] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.790 [INFO][3940] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--tp6j8-eth0 coredns-7d764666f9- kube-system 0da94b48-bfde-434e-b905-92be304e2a09 950 0 2026-03-12 01:38:52 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-tp6j8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bdf599797e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }}
ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.790 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.861 [INFO][4031] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" HandleID="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.880 [INFO][4031] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" HandleID="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037dbb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-tp6j8", "timestamp":"2026-03-12 01:39:17.861527097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002114a0)} Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:17.881 [INFO][4031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.062 [INFO][4031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.062 [INFO][4031] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.084 [INFO][4031] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.120 [INFO][4031] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.140 [INFO][4031] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.147 [INFO][4031] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.149 [INFO][4031] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.149 [INFO][4031] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.152 [INFO][4031] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.160 [INFO][4031] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.168 [INFO][4031] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.168 [INFO][4031] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" host="localhost" Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.168 [INFO][4031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:39:18.218600 containerd[1457]: 2026-03-12 01:39:18.168 [INFO][4031] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" HandleID="k8s-pod-network.e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.178 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--tp6j8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0da94b48-bfde-434e-b905-92be304e2a09", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-tp6j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bdf599797e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.179 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.179 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bdf599797e ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.195
[INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.197 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--tp6j8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0da94b48-bfde-434e-b905-92be304e2a09", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab", Pod:"coredns-7d764666f9-tp6j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bdf599797e", MAC:"26:5c:58:84:4e:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.219748 containerd[1457]: 2026-03-12 01:39:18.211 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab" Namespace="kube-system" Pod="coredns-7d764666f9-tp6j8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:18.230820 containerd[1457]: time="2026-03-12T01:39:18.226937899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.230820 containerd[1457]: time="2026-03-12T01:39:18.227000009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.230820 containerd[1457]: time="2026-03-12T01:39:18.227014388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.230820 containerd[1457]: time="2026-03-12T01:39:18.227100126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.277132 containerd[1457]: time="2026-03-12T01:39:18.276862532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xzc4r,Uid:6337dcae-e8ff-47d2-900a-5c71524380d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d\"" Mar 12 01:39:18.279883 systemd[1]: Started cri-containerd-fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734.scope - libcontainer container fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734. Mar 12 01:39:18.292314 systemd[1]: run-containerd-runc-k8s.io-fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734-runc.U1OUcS.mount: Deactivated successfully. Mar 12 01:39:18.312800 systemd[1]: Removed slice kubepods-besteffort-pod1d5c8986_3e41_46b9_9e79_5445416fc70d.slice - libcontainer container kubepods-besteffort-pod1d5c8986_3e41_46b9_9e79_5445416fc70d.slice. Mar 12 01:39:18.333338 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.340898 systemd-networkd[1381]: caliccd75f62398: Link UP Mar 12 01:39:18.354437 containerd[1457]: time="2026-03-12T01:39:18.348125033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.354437 containerd[1457]: time="2026-03-12T01:39:18.348173969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.354437 containerd[1457]: time="2026-03-12T01:39:18.349892426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.354437 containerd[1457]: time="2026-03-12T01:39:18.352821984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.361098 systemd-networkd[1381]: caliccd75f62398: Gained carrier Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.759 [ERROR][3954] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.789 [INFO][3954] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0 calico-kube-controllers-fcfbfb698- calico-system 59eed31b-e704-4270-b07c-3c68fa6fc47c 953 0 2026-03-12 01:39:02 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fcfbfb698 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fcfbfb698-xmdzd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccd75f62398 [] [] }} ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.790 [INFO][3954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.897 [INFO][4034] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" HandleID="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.907 [INFO][4034] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" HandleID="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028bb60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fcfbfb698-xmdzd", "timestamp":"2026-03-12 01:39:17.897284931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00062eb00)} Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:17.907 [INFO][4034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.169 [INFO][4034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.169 [INFO][4034] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.185 [INFO][4034] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.213 [INFO][4034] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.234 [INFO][4034] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.244 [INFO][4034] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.252 [INFO][4034] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.252 [INFO][4034] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.259 [INFO][4034] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8 Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.270 [INFO][4034] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.282 [INFO][4034] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.283 [INFO][4034] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" host="localhost" Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.284 [INFO][4034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:39:18.410936 containerd[1457]: 2026-03-12 01:39:18.284 [INFO][4034] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" HandleID="k8s-pod-network.ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.299 [INFO][3954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0", GenerateName:"calico-kube-controllers-fcfbfb698-", Namespace:"calico-system", SelfLink:"", UID:"59eed31b-e704-4270-b07c-3c68fa6fc47c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fcfbfb698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fcfbfb698-xmdzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd75f62398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.300 [INFO][3954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.300 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd75f62398 ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.363 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.364 [INFO][3954] cni-plugin/k8s.go 446: Added Mac, interface name,
and active container ID to endpoint ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0", GenerateName:"calico-kube-controllers-fcfbfb698-", Namespace:"calico-system", SelfLink:"", UID:"59eed31b-e704-4270-b07c-3c68fa6fc47c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fcfbfb698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8", Pod:"calico-kube-controllers-fcfbfb698-xmdzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd75f62398", MAC:"be:e5:87:28:58:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.412503 containerd[1457]: 2026-03-12 01:39:18.400 [INFO][3954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8" Namespace="calico-system" Pod="calico-kube-controllers-fcfbfb698-xmdzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:18.422267 containerd[1457]: time="2026-03-12T01:39:18.422022677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r96sx,Uid:4350a8ed-9db0-4145-8365-af9918373d13,Namespace:calico-system,Attempt:1,} returns sandbox id \"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734\"" Mar 12 01:39:18.423732 systemd[1]: Created slice kubepods-besteffort-pod5c550c55_3b51_4254_931f_62c41a1525a5.slice - libcontainer container kubepods-besteffort-pod5c550c55_3b51_4254_931f_62c41a1525a5.slice.
Mar 12 01:39:18.463497 kubelet[2511]: I0312 01:39:18.461277 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c550c55-3b51-4254-931f-62c41a1525a5-whisker-ca-bundle\") pod \"whisker-9c69bdc4d-wmm4h\" (UID: \"5c550c55-3b51-4254-931f-62c41a1525a5\") " pod="calico-system/whisker-9c69bdc4d-wmm4h" Mar 12 01:39:18.467224 kubelet[2511]: I0312 01:39:18.463364 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5c550c55-3b51-4254-931f-62c41a1525a5-nginx-config\") pod \"whisker-9c69bdc4d-wmm4h\" (UID: \"5c550c55-3b51-4254-931f-62c41a1525a5\") " pod="calico-system/whisker-9c69bdc4d-wmm4h" Mar 12 01:39:18.467224 kubelet[2511]: I0312 01:39:18.464220 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c550c55-3b51-4254-931f-62c41a1525a5-whisker-backend-key-pair\") pod \"whisker-9c69bdc4d-wmm4h\" (UID: \"5c550c55-3b51-4254-931f-62c41a1525a5\") " pod="calico-system/whisker-9c69bdc4d-wmm4h" Mar 12 01:39:18.467224 kubelet[2511]: I0312 01:39:18.464633 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsj2\" (UniqueName: \"kubernetes.io/projected/5c550c55-3b51-4254-931f-62c41a1525a5-kube-api-access-jwsj2\") pod \"whisker-9c69bdc4d-wmm4h\" (UID: \"5c550c55-3b51-4254-931f-62c41a1525a5\") " pod="calico-system/whisker-9c69bdc4d-wmm4h" Mar 12 01:39:18.466846 systemd[1]: Started cri-containerd-e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab.scope - libcontainer container e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab. Mar 12 01:39:18.488612 containerd[1457]: time="2026-03-12T01:39:18.488166678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.488612 containerd[1457]: time="2026-03-12T01:39:18.488340178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.488612 containerd[1457]: time="2026-03-12T01:39:18.488414933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.493040 containerd[1457]: time="2026-03-12T01:39:18.492812659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.493394 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.504433 systemd-networkd[1381]: cali0c7849a2a56: Link UP Mar 12 01:39:18.505789 systemd-networkd[1381]: cali0c7849a2a56: Gained carrier Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.798 [ERROR][3986] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.825 [INFO][3986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--pjnrc-eth0 coredns-7d764666f9- kube-system 6ac744d1-2bf6-4481-882f-f786a1600883 954 0 2026-03-12 01:38:52 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-pjnrc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c7849a2a56 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.825 [INFO][3986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.915 [INFO][4050] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" HandleID="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.923 [INFO][4050] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" HandleID="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277cc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-pjnrc", "timestamp":"2026-03-12 01:39:17.915227529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022db80)} Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:17.924 [INFO][4050] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.284 [INFO][4050] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.285 [INFO][4050] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.302 [INFO][4050] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.333 [INFO][4050] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.370 [INFO][4050] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.384 [INFO][4050] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.401 [INFO][4050] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.402 [INFO][4050] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.413 [INFO][4050] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1 Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.467 [INFO][4050] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.491 [INFO][4050] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.492 [INFO][4050] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" host="localhost" Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.492 [INFO][4050] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:39:18.567595 containerd[1457]: 2026-03-12 01:39:18.492 [INFO][4050] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" HandleID="k8s-pod-network.3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.500 [INFO][3986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pjnrc-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6ac744d1-2bf6-4481-882f-f786a1600883", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-pjnrc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c7849a2a56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.500 [INFO][3986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.501 [INFO][3986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c7849a2a56 ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.508
[INFO][3986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.510 [INFO][3986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pjnrc-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6ac744d1-2bf6-4481-882f-f786a1600883", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1", Pod:"coredns-7d764666f9-pjnrc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c7849a2a56", MAC:"36:a7:33:5a:75:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.568359 containerd[1457]: 2026-03-12 01:39:18.537 [INFO][3986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1" Namespace="kube-system" Pod="coredns-7d764666f9-pjnrc" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:18.569826 systemd[1]: Started cri-containerd-ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8.scope - libcontainer container ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8.
Mar 12 01:39:18.599387 containerd[1457]: time="2026-03-12T01:39:18.599107976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tp6j8,Uid:0da94b48-bfde-434e-b905-92be304e2a09,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab\"" Mar 12 01:39:18.600894 kubelet[2511]: E0312 01:39:18.600606 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:18.615432 containerd[1457]: time="2026-03-12T01:39:18.615094033Z" level=info msg="CreateContainer within sandbox \"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:39:18.641415 containerd[1457]: time="2026-03-12T01:39:18.641216768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.641867 containerd[1457]: time="2026-03-12T01:39:18.641630109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.641949 containerd[1457]: time="2026-03-12T01:39:18.641922742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.642195 containerd[1457]: time="2026-03-12T01:39:18.642169085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.642385 systemd-networkd[1381]: cali1d4257d396b: Link UP Mar 12 01:39:18.643596 systemd-networkd[1381]: cali1d4257d396b: Gained carrier Mar 12 01:39:18.661347 containerd[1457]: time="2026-03-12T01:39:18.661259780Z" level=info msg="CreateContainer within sandbox \"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fba5cc3c5d66e9742d5eaa7d9e7280c82274725cf5dea5cf6569e672e41bf11\"" Mar 12 01:39:18.672743 containerd[1457]: time="2026-03-12T01:39:18.672536338Z" level=info msg="StartContainer for \"0fba5cc3c5d66e9742d5eaa7d9e7280c82274725cf5dea5cf6569e672e41bf11\"" Mar 12 01:39:18.676106 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.862 [ERROR][4000] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.894 [INFO][4000] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0 calico-apiserver-88f858578- calico-system ce2d517a-d1f9-487b-8064-201ed4846645 956 0 2026-03-12 01:39:01 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88f858578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-88f858578-8cj8d eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1d4257d396b [] [] }}
ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.894 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.945 [INFO][4070] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" HandleID="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.969 [INFO][4070] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" HandleID="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-88f858578-8cj8d", "timestamp":"2026-03-12 01:39:17.945114022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007942c0)} Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:17.969 [INFO][4070] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.494 [INFO][4070] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.494 [INFO][4070] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.503 [INFO][4070] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.544 [INFO][4070] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.562 [INFO][4070] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.577 [INFO][4070] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.590 [INFO][4070] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.591 [INFO][4070] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.595 [INFO][4070] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591 Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.605 [INFO][4070] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.619 [INFO][4070] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.620 [INFO][4070] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" host="localhost" Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.620 [INFO][4070] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:39:18.682371 containerd[1457]: 2026-03-12 01:39:18.620 [INFO][4070] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" HandleID="k8s-pod-network.81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.630 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"ce2d517a-d1f9-487b-8064-201ed4846645", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-88f858578-8cj8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d4257d396b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.630 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.630 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d4257d396b ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.645 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.649 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"ce2d517a-d1f9-487b-8064-201ed4846645", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591", Pod:"calico-apiserver-88f858578-8cj8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d4257d396b", MAC:"f6:e1:5a:d0:e5:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:18.683138 containerd[1457]: 2026-03-12 01:39:18.671 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591" Namespace="calico-system" Pod="calico-apiserver-88f858578-8cj8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:18.710483 systemd[1]: Started cri-containerd-3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1.scope - libcontainer container 3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1. Mar 12 01:39:18.738069 containerd[1457]: time="2026-03-12T01:39:18.737936043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c69bdc4d-wmm4h,Uid:5c550c55-3b51-4254-931f-62c41a1525a5,Namespace:calico-system,Attempt:0,}" Mar 12 01:39:18.743829 systemd[1]: Started cri-containerd-0fba5cc3c5d66e9742d5eaa7d9e7280c82274725cf5dea5cf6569e672e41bf11.scope - libcontainer container 0fba5cc3c5d66e9742d5eaa7d9e7280c82274725cf5dea5cf6569e672e41bf11. Mar 12 01:39:18.756103 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:18.811707 containerd[1457]: time="2026-03-12T01:39:18.811486893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fcfbfb698-xmdzd,Uid:59eed31b-e704-4270-b07c-3c68fa6fc47c,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8\"" Mar 12 01:39:18.819001 containerd[1457]: time="2026-03-12T01:39:18.817632439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:18.819001 containerd[1457]: time="2026-03-12T01:39:18.817768796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:18.819001 containerd[1457]: time="2026-03-12T01:39:18.817782884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.828901 containerd[1457]: time="2026-03-12T01:39:18.828736762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:18.857256 containerd[1457]: time="2026-03-12T01:39:18.857222136Z" level=info msg="StartContainer for \"0fba5cc3c5d66e9742d5eaa7d9e7280c82274725cf5dea5cf6569e672e41bf11\" returns successfully" Mar 12 01:39:18.870354 containerd[1457]: time="2026-03-12T01:39:18.867212845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pjnrc,Uid:6ac744d1-2bf6-4481-882f-f786a1600883,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1\"" Mar 12 01:39:18.876376 kubelet[2511]: E0312 01:39:18.876352 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:18.887864 systemd[1]: Started cri-containerd-81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591.scope - libcontainer container 81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591. Mar 12 01:39:18.890461 containerd[1457]: time="2026-03-12T01:39:18.890394729Z" level=info msg="CreateContainer within sandbox \"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:39:18.915122 containerd[1457]: time="2026-03-12T01:39:18.915049920Z" level=info msg="CreateContainer within sandbox \"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cf76592ae7bef3aa9ad33ce1ff496709cfc7fa40a3d88170f42791790a3ce9d\"" Mar 12 01:39:18.942336 containerd[1457]: time="2026-03-12T01:39:18.940519887Z" level=info msg="StartContainer for \"2cf76592ae7bef3aa9ad33ce1ff496709cfc7fa40a3d88170f42791790a3ce9d\"" Mar 12 01:39:18.990920 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:19.029736 kernel: calico-node[4135]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:39:19.136870 systemd-networkd[1381]: califb758cf90b7: Gained IPv6LL Mar 12 01:39:19.140102 kubelet[2511]: I0312 01:39:19.139481 2511 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d5c8986-3e41-46b9-9e79-5445416fc70d" path="/var/lib/kubelet/pods/1d5c8986-3e41-46b9-9e79-5445416fc70d/volumes" Mar 12 01:39:19.177854 systemd[1]: Started cri-containerd-2cf76592ae7bef3aa9ad33ce1ff496709cfc7fa40a3d88170f42791790a3ce9d.scope - libcontainer container 2cf76592ae7bef3aa9ad33ce1ff496709cfc7fa40a3d88170f42791790a3ce9d. 
Mar 12 01:39:19.184911 containerd[1457]: time="2026-03-12T01:39:19.184807285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f858578-8cj8d,Uid:ce2d517a-d1f9-487b-8064-201ed4846645,Namespace:calico-system,Attempt:1,} returns sandbox id \"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591\"" Mar 12 01:39:19.200310 systemd-networkd[1381]: califca6a1ae899: Gained IPv6LL Mar 12 01:39:19.301802 containerd[1457]: time="2026-03-12T01:39:19.301563522Z" level=info msg="StartContainer for \"2cf76592ae7bef3aa9ad33ce1ff496709cfc7fa40a3d88170f42791790a3ce9d\" returns successfully" Mar 12 01:39:19.320595 kubelet[2511]: E0312 01:39:19.320501 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:19.373942 kubelet[2511]: E0312 01:39:19.373887 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:19.445290 kubelet[2511]: I0312 01:39:19.444995 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-tp6j8" podStartSLOduration=27.444982553 podStartE2EDuration="27.444982553s" podCreationTimestamp="2026-03-12 01:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:39:19.442279177 +0000 UTC m=+34.595673866" watchObservedRunningTime="2026-03-12 01:39:19.444982553 +0000 UTC m=+34.598377244" Mar 12 01:39:19.471739 kubelet[2511]: I0312 01:39:19.471462 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pjnrc" podStartSLOduration=27.471441135 podStartE2EDuration="27.471441135s" podCreationTimestamp="2026-03-12 01:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:39:19.464769746 +0000 UTC m=+34.618164437" watchObservedRunningTime="2026-03-12 01:39:19.471441135 +0000 UTC m=+34.624835825" Mar 12 01:39:19.713905 systemd-networkd[1381]: cali1d4257d396b: Gained IPv6LL Mar 12 01:39:19.782239 systemd-networkd[1381]: caliccd75f62398: Gained IPv6LL Mar 12 01:39:20.107327 systemd-networkd[1381]: calid5d58159905: Gained IPv6LL Mar 12 01:39:20.159282 systemd-networkd[1381]: cali0bdf599797e: Gained IPv6LL Mar 12 01:39:20.338358 systemd-networkd[1381]: calif918b08a7ad: Link UP Mar 12 01:39:20.341182 systemd-networkd[1381]: calif918b08a7ad: Gained carrier Mar 12 01:39:20.352582 systemd-networkd[1381]: cali0c7849a2a56: Gained IPv6LL Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.050 [INFO][4546] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0 whisker-9c69bdc4d- calico-system 5c550c55-3b51-4254-931f-62c41a1525a5 999 0 2026-03-12 01:39:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9c69bdc4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9c69bdc4d-wmm4h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif918b08a7ad [] [] }} ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" 
Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.060 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.241 [INFO][4643] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" HandleID="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Workload="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.405 [INFO][4643] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" HandleID="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Workload="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9c69bdc4d-wmm4h", "timestamp":"2026-03-12 01:39:19.24101906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005871e0)} Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.412 [INFO][4643] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.412 [INFO][4643] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.412 [INFO][4643] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.434 [INFO][4643] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.455 [INFO][4643] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.479 [INFO][4643] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.645 [INFO][4643] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.689 [INFO][4643] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.689 [INFO][4643] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:19.844 [INFO][4643] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0 Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:20.055 [INFO][4643] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:20.244 [INFO][4643] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:20.248 [INFO][4643] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" host="localhost" Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:20.256 [INFO][4643] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:39:20.369856 containerd[1457]: 2026-03-12 01:39:20.256 [INFO][4643] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" HandleID="k8s-pod-network.0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Workload="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.278 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0", GenerateName:"whisker-9c69bdc4d-", Namespace:"calico-system", SelfLink:"", UID:"5c550c55-3b51-4254-931f-62c41a1525a5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c69bdc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9c69bdc4d-wmm4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif918b08a7ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.319 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.328 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif918b08a7ad ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.342 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.344 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0", GenerateName:"whisker-9c69bdc4d-", Namespace:"calico-system", SelfLink:"", UID:"5c550c55-3b51-4254-931f-62c41a1525a5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c69bdc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0", Pod:"whisker-9c69bdc4d-wmm4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif918b08a7ad", MAC:"3e:9b:ed:b9:db:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:20.370850 containerd[1457]: 2026-03-12 01:39:20.362 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0" Namespace="calico-system" Pod="whisker-9c69bdc4d-wmm4h" WorkloadEndpoint="localhost-k8s-whisker--9c69bdc4d--wmm4h-eth0" Mar 12 01:39:20.426755 containerd[1457]: time="2026-03-12T01:39:20.418476996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:39:20.426755 containerd[1457]: time="2026-03-12T01:39:20.418533667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:39:20.426755 containerd[1457]: time="2026-03-12T01:39:20.418543556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:20.426755 containerd[1457]: time="2026-03-12T01:39:20.418628372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:39:20.426939 kubelet[2511]: E0312 01:39:20.421129 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:20.430338 kubelet[2511]: E0312 01:39:20.430271 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:20.458084 systemd[1]: run-containerd-runc-k8s.io-0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0-runc.OC7W1a.mount: Deactivated successfully. Mar 12 01:39:20.468010 systemd[1]: Started cri-containerd-0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0.scope - libcontainer container 0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0. 
Mar 12 01:39:20.509866 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:39:20.547333 containerd[1457]: time="2026-03-12T01:39:20.547281185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c69bdc4d-wmm4h,Uid:5c550c55-3b51-4254-931f-62c41a1525a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0\"" Mar 12 01:39:20.684119 systemd-networkd[1381]: vxlan.calico: Link UP Mar 12 01:39:20.687137 systemd-networkd[1381]: vxlan.calico: Gained carrier Mar 12 01:39:21.187080 containerd[1457]: time="2026-03-12T01:39:21.187029282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:21.188342 containerd[1457]: time="2026-03-12T01:39:21.188250118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:39:21.189567 containerd[1457]: time="2026-03-12T01:39:21.189523231Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:21.193004 containerd[1457]: time="2026-03-12T01:39:21.192914313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:21.193527 containerd[1457]: time="2026-03-12T01:39:21.193426481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.077288406s" Mar 12 01:39:21.193527 containerd[1457]: time="2026-03-12T01:39:21.193473233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:39:21.195379 containerd[1457]: time="2026-03-12T01:39:21.195339673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:39:21.202338 containerd[1457]: time="2026-03-12T01:39:21.200893492Z" level=info msg="CreateContainer within sandbox \"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:39:21.217199 containerd[1457]: time="2026-03-12T01:39:21.217107049Z" level=info msg="CreateContainer within sandbox \"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b87cc33ef92279a05e46ddf23db1dd1c19623c6a07562d512a8675eb0202cf3e\"" Mar 12 01:39:21.219245 containerd[1457]: time="2026-03-12T01:39:21.217920002Z" level=info msg="StartContainer for \"b87cc33ef92279a05e46ddf23db1dd1c19623c6a07562d512a8675eb0202cf3e\"" Mar 12 01:39:21.258919 systemd[1]: Started cri-containerd-b87cc33ef92279a05e46ddf23db1dd1c19623c6a07562d512a8675eb0202cf3e.scope - libcontainer container b87cc33ef92279a05e46ddf23db1dd1c19623c6a07562d512a8675eb0202cf3e. 
Mar 12 01:39:21.320298 containerd[1457]: time="2026-03-12T01:39:21.320206583Z" level=info msg="StartContainer for \"b87cc33ef92279a05e46ddf23db1dd1c19623c6a07562d512a8675eb0202cf3e\" returns successfully" Mar 12 01:39:21.426446 kubelet[2511]: E0312 01:39:21.423525 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:21.426446 kubelet[2511]: E0312 01:39:21.423943 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:21.949929 systemd-networkd[1381]: calif918b08a7ad: Gained IPv6LL Mar 12 01:39:22.013924 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Mar 12 01:39:22.268714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843262149.mount: Deactivated successfully. Mar 12 01:39:22.426138 kubelet[2511]: E0312 01:39:22.426104 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:22.426687 kubelet[2511]: I0312 01:39:22.426581 2511 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:22.766843 containerd[1457]: time="2026-03-12T01:39:22.766754574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:22.767762 containerd[1457]: time="2026-03-12T01:39:22.767699146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:39:22.769427 containerd[1457]: time="2026-03-12T01:39:22.769372258Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:22.774090 containerd[1457]: time="2026-03-12T01:39:22.774039860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:22.774586 containerd[1457]: time="2026-03-12T01:39:22.774498153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.579112181s" Mar 12 01:39:22.774586 containerd[1457]: time="2026-03-12T01:39:22.774558490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:39:22.776441 containerd[1457]: time="2026-03-12T01:39:22.776327721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:39:22.781493 containerd[1457]: time="2026-03-12T01:39:22.781384094Z" level=info msg="CreateContainer within sandbox \"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:39:22.799231 containerd[1457]: time="2026-03-12T01:39:22.799177210Z" level=info msg="CreateContainer within sandbox 
\"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"acf4c778fefa58e17381fd788e81302adca88b723d79ca366db8679a94689899\"" Mar 12 01:39:22.800069 containerd[1457]: time="2026-03-12T01:39:22.799963781Z" level=info msg="StartContainer for \"acf4c778fefa58e17381fd788e81302adca88b723d79ca366db8679a94689899\"" Mar 12 01:39:22.852000 systemd[1]: Started cri-containerd-acf4c778fefa58e17381fd788e81302adca88b723d79ca366db8679a94689899.scope - libcontainer container acf4c778fefa58e17381fd788e81302adca88b723d79ca366db8679a94689899. Mar 12 01:39:22.898889 containerd[1457]: time="2026-03-12T01:39:22.898818154Z" level=info msg="StartContainer for \"acf4c778fefa58e17381fd788e81302adca88b723d79ca366db8679a94689899\" returns successfully" Mar 12 01:39:23.287746 containerd[1457]: time="2026-03-12T01:39:23.287629171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:23.288780 containerd[1457]: time="2026-03-12T01:39:23.288695626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:39:23.291495 containerd[1457]: time="2026-03-12T01:39:23.291451880Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:23.294361 containerd[1457]: time="2026-03-12T01:39:23.294316143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:23.295021 containerd[1457]: time="2026-03-12T01:39:23.294989283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 518.594051ms" Mar 12 01:39:23.295056 containerd[1457]: time="2026-03-12T01:39:23.295027718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:39:23.304295 containerd[1457]: time="2026-03-12T01:39:23.304188065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 01:39:23.308903 containerd[1457]: time="2026-03-12T01:39:23.308795722Z" level=info msg="CreateContainer within sandbox \"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:39:23.327290 containerd[1457]: time="2026-03-12T01:39:23.327168804Z" level=info msg="CreateContainer within sandbox \"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6cbc00dbba6721be2c0abd25b6fee2656bcdf1abec113a32b5ab54c17ab13336\"" Mar 12 01:39:23.328311 containerd[1457]: time="2026-03-12T01:39:23.328179171Z" level=info msg="StartContainer for \"6cbc00dbba6721be2c0abd25b6fee2656bcdf1abec113a32b5ab54c17ab13336\"" Mar 12 01:39:23.364040 systemd[1]: Started cri-containerd-6cbc00dbba6721be2c0abd25b6fee2656bcdf1abec113a32b5ab54c17ab13336.scope - libcontainer container 
6cbc00dbba6721be2c0abd25b6fee2656bcdf1abec113a32b5ab54c17ab13336. Mar 12 01:39:23.397472 containerd[1457]: time="2026-03-12T01:39:23.397257128Z" level=info msg="StartContainer for \"6cbc00dbba6721be2c0abd25b6fee2656bcdf1abec113a32b5ab54c17ab13336\" returns successfully" Mar 12 01:39:23.444710 kubelet[2511]: I0312 01:39:23.444608 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-xzc4r" podStartSLOduration=17.951693514 podStartE2EDuration="22.444597888s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:18.283185399 +0000 UTC m=+33.436580089" lastFinishedPulling="2026-03-12 01:39:22.776089773 +0000 UTC m=+37.929484463" observedRunningTime="2026-03-12 01:39:23.443471616 +0000 UTC m=+38.596866307" watchObservedRunningTime="2026-03-12 01:39:23.444597888 +0000 UTC m=+38.597992579" Mar 12 01:39:23.445181 kubelet[2511]: I0312 01:39:23.444778 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-88f858578-xkxvg" podStartSLOduration=19.365138764 podStartE2EDuration="22.444770143s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:18.115400782 +0000 UTC m=+33.268795482" lastFinishedPulling="2026-03-12 01:39:21.195032171 +0000 UTC m=+36.348426861" observedRunningTime="2026-03-12 01:39:21.439127662 +0000 UTC m=+36.592522352" watchObservedRunningTime="2026-03-12 01:39:23.444770143 +0000 UTC m=+38.598164833" Mar 12 01:39:24.717010 containerd[1457]: time="2026-03-12T01:39:24.716945617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:24.718171 containerd[1457]: time="2026-03-12T01:39:24.718093618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 01:39:24.719319 containerd[1457]: time="2026-03-12T01:39:24.719261608Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:24.722278 containerd[1457]: time="2026-03-12T01:39:24.722218304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:24.723099 containerd[1457]: time="2026-03-12T01:39:24.723036243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.418799985s" Mar 12 01:39:24.723209 containerd[1457]: time="2026-03-12T01:39:24.723101440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 01:39:24.724330 containerd[1457]: time="2026-03-12T01:39:24.724037629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:39:24.739584 containerd[1457]: time="2026-03-12T01:39:24.739519123Z" level=info msg="CreateContainer within sandbox 
\"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 01:39:24.774502 containerd[1457]: time="2026-03-12T01:39:24.774441835Z" level=info msg="CreateContainer within sandbox \"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"760dc640dcd1e88c6c61878479285bce7f8a7c53e7877da2c386179fb0458362\"" Mar 12 01:39:24.775392 containerd[1457]: time="2026-03-12T01:39:24.775337847Z" level=info msg="StartContainer for \"760dc640dcd1e88c6c61878479285bce7f8a7c53e7877da2c386179fb0458362\"" Mar 12 01:39:24.813961 containerd[1457]: time="2026-03-12T01:39:24.813872368Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:24.814820 containerd[1457]: time="2026-03-12T01:39:24.814780956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 01:39:24.817131 containerd[1457]: time="2026-03-12T01:39:24.817024232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 92.923122ms" Mar 12 01:39:24.817131 containerd[1457]: time="2026-03-12T01:39:24.817092987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:39:24.819043 containerd[1457]: time="2026-03-12T01:39:24.818825723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 01:39:24.825165 containerd[1457]: time="2026-03-12T01:39:24.824944179Z" level=info msg="CreateContainer within sandbox \"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:39:24.858352 containerd[1457]: time="2026-03-12T01:39:24.858230240Z" level=info msg="CreateContainer within sandbox \"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f3f043721c0bb35fb4509d1161825d9d065639d52271b86e5aff3e804900d12\"" Mar 12 01:39:24.860350 containerd[1457]: time="2026-03-12T01:39:24.859406626Z" level=info msg="StartContainer for \"2f3f043721c0bb35fb4509d1161825d9d065639d52271b86e5aff3e804900d12\"" Mar 12 01:39:24.863822 systemd[1]: Started cri-containerd-760dc640dcd1e88c6c61878479285bce7f8a7c53e7877da2c386179fb0458362.scope - libcontainer container 760dc640dcd1e88c6c61878479285bce7f8a7c53e7877da2c386179fb0458362. Mar 12 01:39:24.897847 systemd[1]: Started cri-containerd-2f3f043721c0bb35fb4509d1161825d9d065639d52271b86e5aff3e804900d12.scope - libcontainer container 2f3f043721c0bb35fb4509d1161825d9d065639d52271b86e5aff3e804900d12. 
Mar 12 01:39:24.937624 containerd[1457]: time="2026-03-12T01:39:24.937556266Z" level=info msg="StartContainer for \"760dc640dcd1e88c6c61878479285bce7f8a7c53e7877da2c386179fb0458362\" returns successfully" Mar 12 01:39:24.958633 containerd[1457]: time="2026-03-12T01:39:24.958532119Z" level=info msg="StartContainer for \"2f3f043721c0bb35fb4509d1161825d9d065639d52271b86e5aff3e804900d12\" returns successfully" Mar 12 01:39:25.474244 kubelet[2511]: I0312 01:39:25.474109 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fcfbfb698-xmdzd" podStartSLOduration=17.568831058 podStartE2EDuration="23.474093861s" podCreationTimestamp="2026-03-12 01:39:02 +0000 UTC" firstStartedPulling="2026-03-12 01:39:18.818624595 +0000 UTC m=+33.972019295" lastFinishedPulling="2026-03-12 01:39:24.723887408 +0000 UTC m=+39.877282098" observedRunningTime="2026-03-12 01:39:25.457856535 +0000 UTC m=+40.611251224" watchObservedRunningTime="2026-03-12 01:39:25.474093861 +0000 UTC m=+40.627488551" Mar 12 01:39:25.545941 kubelet[2511]: I0312 01:39:25.545831 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-88f858578-8cj8d" podStartSLOduration=18.925353285 podStartE2EDuration="24.544975134s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:19.198544754 +0000 UTC m=+34.351939444" lastFinishedPulling="2026-03-12 01:39:24.818166602 +0000 UTC m=+39.971561293" observedRunningTime="2026-03-12 01:39:25.477821946 +0000 UTC m=+40.631216647" watchObservedRunningTime="2026-03-12 01:39:25.544975134 +0000 UTC m=+40.698369824" Mar 12 01:39:26.447120 kubelet[2511]: I0312 01:39:26.447067 2511 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:26.800011 systemd[1]: Started sshd@7-10.0.0.156:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964). Mar 12 01:39:26.871112 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:26.873339 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:26.879098 systemd-logind[1448]: New session 8 of user core. Mar 12 01:39:26.883855 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 12 01:39:27.050971 containerd[1457]: time="2026-03-12T01:39:27.050854108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:27.052133 containerd[1457]: time="2026-03-12T01:39:27.052037839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 12 01:39:27.054393 containerd[1457]: time="2026-03-12T01:39:27.054105995Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:27.059866 containerd[1457]: time="2026-03-12T01:39:27.059807848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:27.060881 containerd[1457]: time="2026-03-12T01:39:27.060605012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.241732608s" Mar 12 01:39:27.060881 containerd[1457]: time="2026-03-12T01:39:27.060633688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 12 01:39:27.062278 containerd[1457]: time="2026-03-12T01:39:27.061752986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 01:39:27.067146 containerd[1457]: time="2026-03-12T01:39:27.067084795Z" level=info msg="CreateContainer within sandbox \"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:39:27.083670 containerd[1457]: time="2026-03-12T01:39:27.083617611Z" level=info msg="CreateContainer within sandbox \"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3765de36c9e860c02f57002b4c86966f794f59c37b08ec49cb7d2d2185b8a049\"" Mar 12 01:39:27.085284 containerd[1457]: time="2026-03-12T01:39:27.084780157Z" level=info msg="StartContainer for \"3765de36c9e860c02f57002b4c86966f794f59c37b08ec49cb7d2d2185b8a049\"" Mar 12 01:39:27.128931 systemd[1]: Started cri-containerd-3765de36c9e860c02f57002b4c86966f794f59c37b08ec49cb7d2d2185b8a049.scope - libcontainer container 3765de36c9e860c02f57002b4c86966f794f59c37b08ec49cb7d2d2185b8a049. Mar 12 01:39:27.178870 containerd[1457]: time="2026-03-12T01:39:27.178814899Z" level=info msg="StartContainer for \"3765de36c9e860c02f57002b4c86966f794f59c37b08ec49cb7d2d2185b8a049\" returns successfully" Mar 12 01:39:27.222203 sshd[5167]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:27.226888 systemd[1]: sshd@7-10.0.0.156:22-10.0.0.1:45964.service: Deactivated successfully. Mar 12 01:39:27.229315 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:39:27.230363 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:39:27.231586 systemd-logind[1448]: Removed session 8. 
Mar 12 01:39:28.492713 containerd[1457]: time="2026-03-12T01:39:28.492610003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:28.493808 containerd[1457]: time="2026-03-12T01:39:28.493753658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 12 01:39:28.495399 containerd[1457]: time="2026-03-12T01:39:28.495354411Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:28.498947 containerd[1457]: time="2026-03-12T01:39:28.498900109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:28.499571 containerd[1457]: time="2026-03-12T01:39:28.499524803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.437747631s" Mar 12 01:39:28.499571 containerd[1457]: time="2026-03-12T01:39:28.499563648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 12 01:39:28.501143 containerd[1457]: time="2026-03-12T01:39:28.500878338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 01:39:28.504881 containerd[1457]: time="2026-03-12T01:39:28.504796772Z" level=info msg="CreateContainer within sandbox \"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 01:39:28.523683 containerd[1457]: time="2026-03-12T01:39:28.523599459Z" level=info msg="CreateContainer within sandbox \"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"101e8f41fe193249af6b39f0a518e51686257fffbbd3ba707371121ea876ebc6\"" Mar 12 01:39:28.524877 containerd[1457]: time="2026-03-12T01:39:28.524639466Z" level=info msg="StartContainer for \"101e8f41fe193249af6b39f0a518e51686257fffbbd3ba707371121ea876ebc6\"" Mar 12 01:39:28.562825 systemd[1]: Started cri-containerd-101e8f41fe193249af6b39f0a518e51686257fffbbd3ba707371121ea876ebc6.scope - libcontainer container 101e8f41fe193249af6b39f0a518e51686257fffbbd3ba707371121ea876ebc6. 
Mar 12 01:39:28.610125 containerd[1457]: time="2026-03-12T01:39:28.609956492Z" level=info msg="StartContainer for \"101e8f41fe193249af6b39f0a518e51686257fffbbd3ba707371121ea876ebc6\" returns successfully" Mar 12 01:39:29.318289 kubelet[2511]: I0312 01:39:29.318243 2511 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 01:39:29.318289 kubelet[2511]: I0312 01:39:29.318283 2511 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 01:39:29.476072 kubelet[2511]: I0312 01:39:29.475937 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-r96sx" podStartSLOduration=18.407524388 podStartE2EDuration="28.475922648s" podCreationTimestamp="2026-03-12 01:39:01 +0000 UTC" firstStartedPulling="2026-03-12 01:39:18.431880378 +0000 UTC m=+33.585275078" lastFinishedPulling="2026-03-12 01:39:28.500278648 +0000 UTC m=+43.653673338" observedRunningTime="2026-03-12 01:39:29.47520006 +0000 UTC m=+44.628594791" watchObservedRunningTime="2026-03-12 01:39:29.475922648 +0000 UTC m=+44.629317358" Mar 12 01:39:29.591957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987785118.mount: Deactivated successfully. Mar 12 01:39:29.612940 containerd[1457]: time="2026-03-12T01:39:29.612893363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:29.613687 containerd[1457]: time="2026-03-12T01:39:29.613595844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 12 01:39:29.614919 containerd[1457]: time="2026-03-12T01:39:29.614873142Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:29.618495 containerd[1457]: time="2026-03-12T01:39:29.618449709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:39:29.619780 containerd[1457]: time="2026-03-12T01:39:29.619732079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.118817101s" Mar 12 01:39:29.619833 containerd[1457]: time="2026-03-12T01:39:29.619787446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 12 01:39:29.625144 containerd[1457]: time="2026-03-12T01:39:29.625104336Z" level=info msg="CreateContainer within sandbox \"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:39:29.645506 containerd[1457]: time="2026-03-12T01:39:29.645452083Z" level=info msg="CreateContainer within sandbox 
\"0b02333e98ad2c1c30f7c94310237f0b5b1304c24325dd9a12d071f3c349c8f0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4f87533c3d98a6716bdcadc90885a8ee1f6cc0ed4486d760f148b1e30847d3df\"" Mar 12 01:39:29.646040 containerd[1457]: time="2026-03-12T01:39:29.646002547Z" level=info msg="StartContainer for \"4f87533c3d98a6716bdcadc90885a8ee1f6cc0ed4486d760f148b1e30847d3df\"" Mar 12 01:39:29.698845 systemd[1]: Started cri-containerd-4f87533c3d98a6716bdcadc90885a8ee1f6cc0ed4486d760f148b1e30847d3df.scope - libcontainer container 4f87533c3d98a6716bdcadc90885a8ee1f6cc0ed4486d760f148b1e30847d3df. Mar 12 01:39:29.751931 containerd[1457]: time="2026-03-12T01:39:29.751805338Z" level=info msg="StartContainer for \"4f87533c3d98a6716bdcadc90885a8ee1f6cc0ed4486d760f148b1e30847d3df\" returns successfully" Mar 12 01:39:30.481262 kubelet[2511]: I0312 01:39:30.481118 2511 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-9c69bdc4d-wmm4h" podStartSLOduration=3.410529566 podStartE2EDuration="12.48110087s" podCreationTimestamp="2026-03-12 01:39:18 +0000 UTC" firstStartedPulling="2026-03-12 01:39:20.550305221 +0000 UTC m=+35.703699921" lastFinishedPulling="2026-03-12 01:39:29.620876525 +0000 UTC m=+44.774271225" observedRunningTime="2026-03-12 01:39:30.480125462 +0000 UTC m=+45.633520162" watchObservedRunningTime="2026-03-12 01:39:30.48110087 +0000 UTC m=+45.634495600" Mar 12 01:39:32.234859 systemd[1]: Started sshd@8-10.0.0.156:22-10.0.0.1:47686.service - OpenSSH per-connection server daemon (10.0.0.1:47686). Mar 12 01:39:32.305512 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 47686 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:32.308775 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:32.317328 systemd-logind[1448]: New session 9 of user core. Mar 12 01:39:32.336503 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 01:39:32.567080 sshd[5344]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:32.571187 systemd[1]: sshd@8-10.0.0.156:22-10.0.0.1:47686.service: Deactivated successfully. Mar 12 01:39:32.573384 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:39:32.574312 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:39:32.575473 systemd-logind[1448]: Removed session 9. Mar 12 01:39:37.578502 systemd[1]: Started sshd@9-10.0.0.156:22-10.0.0.1:47688.service - OpenSSH per-connection server daemon (10.0.0.1:47688). Mar 12 01:39:37.618410 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 47688 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:37.620118 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:37.624546 systemd-logind[1448]: New session 10 of user core. Mar 12 01:39:37.635791 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 01:39:37.752126 sshd[5362]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:37.756214 systemd[1]: sshd@9-10.0.0.156:22-10.0.0.1:47688.service: Deactivated successfully. Mar 12 01:39:37.758159 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 01:39:37.758938 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Mar 12 01:39:37.760104 systemd-logind[1448]: Removed session 10. 
Mar 12 01:39:42.704930 kubelet[2511]: I0312 01:39:42.704883 2511 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:42.768919 systemd[1]: Started sshd@10-10.0.0.156:22-10.0.0.1:37596.service - OpenSSH per-connection server daemon (10.0.0.1:37596). Mar 12 01:39:42.809993 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 37596 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:42.811866 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:42.816778 systemd-logind[1448]: New session 11 of user core. Mar 12 01:39:42.823870 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 01:39:42.953513 sshd[5411]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:42.961868 systemd[1]: sshd@10-10.0.0.156:22-10.0.0.1:37596.service: Deactivated successfully. Mar 12 01:39:42.963754 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 01:39:42.966794 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Mar 12 01:39:42.973304 systemd[1]: Started sshd@11-10.0.0.156:22-10.0.0.1:37610.service - OpenSSH per-connection server daemon (10.0.0.1:37610). Mar 12 01:39:42.975080 systemd-logind[1448]: Removed session 11. Mar 12 01:39:43.010149 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:43.012364 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:43.018456 systemd-logind[1448]: New session 12 of user core. Mar 12 01:39:43.028863 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 01:39:43.231718 sshd[5426]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:43.241354 systemd[1]: sshd@11-10.0.0.156:22-10.0.0.1:37610.service: Deactivated successfully. Mar 12 01:39:43.243167 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 01:39:43.246857 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Mar 12 01:39:43.258722 systemd[1]: Started sshd@12-10.0.0.156:22-10.0.0.1:37616.service - OpenSSH per-connection server daemon (10.0.0.1:37616). Mar 12 01:39:43.261002 systemd-logind[1448]: Removed session 12. Mar 12 01:39:43.301551 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 37616 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:43.303396 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:43.308193 systemd-logind[1448]: New session 13 of user core. Mar 12 01:39:43.314794 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 01:39:43.432895 sshd[5439]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:43.437535 systemd[1]: sshd@12-10.0.0.156:22-10.0.0.1:37616.service: Deactivated successfully. Mar 12 01:39:43.439732 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 01:39:43.440498 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Mar 12 01:39:43.441747 systemd-logind[1448]: Removed session 13. 
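
The containerd entries that follow show the kubelet garbage-collecting stale pod sandboxes: for each sandbox, StopPodSandbox triggers a Calico CNI DEL (netns cleanup plus an IPAM release performed under the host-wide IPAM lock), and RemovePodSandbox then deletes the sandbox record; the WARNINGs mark cases where the WorkloadEndpoint or IP allocation is already gone or belongs to a newer container, so the plugin skips that part of the cleanup. The same two calls can be issued directly against containerd's CRI socket; a sketch follows, reusing a sandbox ID from the log purely as a placeholder — on a live node you would list sandboxes first rather than hard-code one.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint (the same socket the kubelet uses).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Placeholder: the first sandbox ID torn down in the log below.
        const sandboxID = "e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720"

        // StopPodSandbox tears down networking first (the Calico CNI DEL).
        if _, err := rt.StopPodSandbox(ctx,
            &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            log.Fatal(err)
        }
        // RemovePodSandbox then deletes the sandbox record; containerd logs
        // "RemovePodSandbox ... returns successfully" on completion.
        if _, err := rt.RemovePodSandbox(ctx,
            &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            log.Fatal(err)
        }
        log.Printf("sandbox %s stopped and removed", sandboxID[:12])
    }

For ad-hoc use, crictl stopp and crictl rmp wrap these same RPCs.
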
Mar 12 01:39:45.055830 containerd[1457]: time="2026-03-12T01:39:45.055709465Z" level=info msg="StopPodSandbox for \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\"" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.118 [WARNING][5462] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" WorkloadEndpoint="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.118 [INFO][5462] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.118 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" iface="eth0" netns="" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.118 [INFO][5462] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.118 [INFO][5462] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.169 [INFO][5472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.170 [INFO][5472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.170 [INFO][5472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.178 [WARNING][5472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.178 [INFO][5472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.179 [INFO][5472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.186719 containerd[1457]: 2026-03-12 01:39:45.183 [INFO][5462] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.186719 containerd[1457]: time="2026-03-12T01:39:45.186465479Z" level=info msg="TearDown network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" successfully" Mar 12 01:39:45.186719 containerd[1457]: time="2026-03-12T01:39:45.186488603Z" level=info msg="StopPodSandbox for \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" returns successfully" Mar 12 01:39:45.208842 containerd[1457]: time="2026-03-12T01:39:45.208774068Z" level=info msg="RemovePodSandbox for \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\"" Mar 12 01:39:45.211934 containerd[1457]: time="2026-03-12T01:39:45.211885737Z" level=info msg="Forcibly stopping sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\"" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.255 [WARNING][5489] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" WorkloadEndpoint="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.255 [INFO][5489] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.255 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" iface="eth0" netns="" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.255 [INFO][5489] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.255 [INFO][5489] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.281 [INFO][5497] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.281 [INFO][5497] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.282 [INFO][5497] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.288 [WARNING][5497] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.288 [INFO][5497] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" HandleID="k8s-pod-network.e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Workload="localhost-k8s-whisker--866854587--hn6q9-eth0" Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.290 [INFO][5497] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.296544 containerd[1457]: 2026-03-12 01:39:45.293 [INFO][5489] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720" Mar 12 01:39:45.297059 containerd[1457]: time="2026-03-12T01:39:45.296978785Z" level=info msg="TearDown network for sandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" successfully" Mar 12 01:39:45.313952 containerd[1457]: time="2026-03-12T01:39:45.313751907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:45.313952 containerd[1457]: time="2026-03-12T01:39:45.313832472Z" level=info msg="RemovePodSandbox \"e43b20a03bd0245acc02c35de3db4bfb0c1c52493ecc2aaebe98371206d70720\" returns successfully" Mar 12 01:39:45.321011 containerd[1457]: time="2026-03-12T01:39:45.320967763Z" level=info msg="StopPodSandbox for \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\"" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.358 [WARNING][5515] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pjnrc-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6ac744d1-2bf6-4481-882f-f786a1600883", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1", Pod:"coredns-7d764666f9-pjnrc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c7849a2a56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.358 [INFO][5515] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.358 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" iface="eth0" netns="" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.358 [INFO][5515] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.358 [INFO][5515] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.387 [INFO][5524] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.387 [INFO][5524] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.387 [INFO][5524] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.393 [WARNING][5524] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.393 [INFO][5524] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.395 [INFO][5524] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.401575 containerd[1457]: 2026-03-12 01:39:45.398 [INFO][5515] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.401575 containerd[1457]: time="2026-03-12T01:39:45.401613342Z" level=info msg="TearDown network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" successfully" Mar 12 01:39:45.402197 containerd[1457]: time="2026-03-12T01:39:45.401636636Z" level=info msg="StopPodSandbox for \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" returns successfully" Mar 12 01:39:45.402348 containerd[1457]: time="2026-03-12T01:39:45.402293791Z" level=info msg="RemovePodSandbox for \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\"" Mar 12 01:39:45.402388 containerd[1457]: time="2026-03-12T01:39:45.402358736Z" level=info msg="Forcibly stopping sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\"" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.442 [WARNING][5541] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pjnrc-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6ac744d1-2bf6-4481-882f-f786a1600883", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e475dde2175831a76a096f4c5d3a55ca2e0ba7b7544ecc78b4c9c2ad21d26c1", Pod:"coredns-7d764666f9-pjnrc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c7849a2a56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.443 [INFO][5541] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.443 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" iface="eth0" netns="" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.443 [INFO][5541] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.443 [INFO][5541] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.469 [INFO][5549] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.469 [INFO][5549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.469 [INFO][5549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.476 [WARNING][5549] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.476 [INFO][5549] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" HandleID="k8s-pod-network.b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Workload="localhost-k8s-coredns--7d764666f9--pjnrc-eth0" Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.479 [INFO][5549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.485729 containerd[1457]: 2026-03-12 01:39:45.483 [INFO][5541] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4" Mar 12 01:39:45.485729 containerd[1457]: time="2026-03-12T01:39:45.485702502Z" level=info msg="TearDown network for sandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" successfully" Mar 12 01:39:45.497553 containerd[1457]: time="2026-03-12T01:39:45.497468755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:45.497553 containerd[1457]: time="2026-03-12T01:39:45.497533590Z" level=info msg="RemovePodSandbox \"b120445665233ef642fc7999c6ee5c8b2bc022af3ce8da583c99e82236f2e4e4\" returns successfully" Mar 12 01:39:45.498411 containerd[1457]: time="2026-03-12T01:39:45.498064376Z" level=info msg="StopPodSandbox for \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\"" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.538 [WARNING][5566] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"26e1b944-92fc-431c-8345-7c464f156745", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2", Pod:"calico-apiserver-88f858578-xkxvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb758cf90b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.538 [INFO][5566] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.538 [INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" iface="eth0" netns="" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.538 [INFO][5566] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.538 [INFO][5566] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.566 [INFO][5575] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.566 [INFO][5575] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.566 [INFO][5575] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.572 [WARNING][5575] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.572 [INFO][5575] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.574 [INFO][5575] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.581821 containerd[1457]: 2026-03-12 01:39:45.577 [INFO][5566] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.583288 containerd[1457]: time="2026-03-12T01:39:45.583159569Z" level=info msg="TearDown network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" successfully" Mar 12 01:39:45.583288 containerd[1457]: time="2026-03-12T01:39:45.583184086Z" level=info msg="StopPodSandbox for \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" returns successfully" Mar 12 01:39:45.584206 containerd[1457]: time="2026-03-12T01:39:45.584117773Z" level=info msg="RemovePodSandbox for \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\"" Mar 12 01:39:45.584206 containerd[1457]: time="2026-03-12T01:39:45.584174503Z" level=info msg="Forcibly stopping sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\"" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.629 [WARNING][5593] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"26e1b944-92fc-431c-8345-7c464f156745", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6540db3d0a532d028cac914e0f3671a8e7743b2b65e6517aa1ecfb9220e155f2", Pod:"calico-apiserver-88f858578-xkxvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb758cf90b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.629 [INFO][5593] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.629 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" iface="eth0" netns="" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.629 [INFO][5593] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.629 [INFO][5593] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.652 [INFO][5602] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.652 [INFO][5602] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.652 [INFO][5602] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.659 [WARNING][5602] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.659 [INFO][5602] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" HandleID="k8s-pod-network.16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Workload="localhost-k8s-calico--apiserver--88f858578--xkxvg-eth0" Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.661 [INFO][5602] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.667168 containerd[1457]: 2026-03-12 01:39:45.664 [INFO][5593] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3" Mar 12 01:39:45.667578 containerd[1457]: time="2026-03-12T01:39:45.667236835Z" level=info msg="TearDown network for sandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" successfully" Mar 12 01:39:45.674883 containerd[1457]: time="2026-03-12T01:39:45.674829722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:45.674943 containerd[1457]: time="2026-03-12T01:39:45.674891180Z" level=info msg="RemovePodSandbox \"16638a7ae378e11208af9525a3d3bd6edcf99adfb1494b960f2f91c0193672f3\" returns successfully" Mar 12 01:39:45.675461 containerd[1457]: time="2026-03-12T01:39:45.675424092Z" level=info msg="StopPodSandbox for \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\"" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.722 [WARNING][5621] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6337dcae-e8ff-47d2-900a-5c71524380d4", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d", Pod:"goldmane-9f7667bb8-xzc4r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califca6a1ae899", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.722 [INFO][5621] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.722 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" iface="eth0" netns="" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.722 [INFO][5621] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.722 [INFO][5621] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.750 [INFO][5630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.750 [INFO][5630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.750 [INFO][5630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.757 [WARNING][5630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.757 [INFO][5630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.759 [INFO][5630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.764813 containerd[1457]: 2026-03-12 01:39:45.762 [INFO][5621] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.765572 containerd[1457]: time="2026-03-12T01:39:45.764843175Z" level=info msg="TearDown network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" successfully" Mar 12 01:39:45.765572 containerd[1457]: time="2026-03-12T01:39:45.764871288Z" level=info msg="StopPodSandbox for \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" returns successfully" Mar 12 01:39:45.765572 containerd[1457]: time="2026-03-12T01:39:45.765489222Z" level=info msg="RemovePodSandbox for \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\"" Mar 12 01:39:45.765572 containerd[1457]: time="2026-03-12T01:39:45.765513808Z" level=info msg="Forcibly stopping sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\"" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.804 [WARNING][5646] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6337dcae-e8ff-47d2-900a-5c71524380d4", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42784e68c4c63ea116f6fb50be158622a08aa07a8657f9914b0d288f3a78925d", Pod:"goldmane-9f7667bb8-xzc4r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califca6a1ae899", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.804 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.804 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" iface="eth0" netns="" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.804 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.804 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.828 [INFO][5654] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.828 [INFO][5654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.828 [INFO][5654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.835 [WARNING][5654] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.835 [INFO][5654] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" HandleID="k8s-pod-network.e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Workload="localhost-k8s-goldmane--9f7667bb8--xzc4r-eth0" Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.837 [INFO][5654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.842787 containerd[1457]: 2026-03-12 01:39:45.839 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d" Mar 12 01:39:45.842787 containerd[1457]: time="2026-03-12T01:39:45.842410256Z" level=info msg="TearDown network for sandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" successfully" Mar 12 01:39:45.847964 containerd[1457]: time="2026-03-12T01:39:45.847917828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:45.848019 containerd[1457]: time="2026-03-12T01:39:45.847997761Z" level=info msg="RemovePodSandbox \"e1edb32b2c7462e567eb016e1aaca07805584bed8d72974a30b04f0d07a2264d\" returns successfully" Mar 12 01:39:45.848556 containerd[1457]: time="2026-03-12T01:39:45.848531785Z" level=info msg="StopPodSandbox for \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\"" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.889 [WARNING][5672] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--tp6j8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0da94b48-bfde-434e-b905-92be304e2a09", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab", Pod:"coredns-7d764666f9-tp6j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bdf599797e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.889 [INFO][5672] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.889 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" iface="eth0" netns="" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.889 [INFO][5672] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.889 [INFO][5672] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.915 [INFO][5680] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.916 [INFO][5680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.916 [INFO][5680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.921 [WARNING][5680] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.921 [INFO][5680] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.923 [INFO][5680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:45.928432 containerd[1457]: 2026-03-12 01:39:45.926 [INFO][5672] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:45.928898 containerd[1457]: time="2026-03-12T01:39:45.928464224Z" level=info msg="TearDown network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" successfully" Mar 12 01:39:45.928898 containerd[1457]: time="2026-03-12T01:39:45.928487869Z" level=info msg="StopPodSandbox for \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" returns successfully" Mar 12 01:39:45.929231 containerd[1457]: time="2026-03-12T01:39:45.929169101Z" level=info msg="RemovePodSandbox for \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\"" Mar 12 01:39:45.929231 containerd[1457]: time="2026-03-12T01:39:45.929215851Z" level=info msg="Forcibly stopping sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\"" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:45.971 [WARNING][5698] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--tp6j8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0da94b48-bfde-434e-b905-92be304e2a09", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4f86eb47678149c76c84761879e832969c0b3cddd2c6e87923185ef2870e6ab", Pod:"coredns-7d764666f9-tp6j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bdf599797e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:45.972 [INFO][5698] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:45.972 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" iface="eth0" netns="" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:45.972 [INFO][5698] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:45.972 [INFO][5698] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.002 [INFO][5706] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.002 [INFO][5706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.002 [INFO][5706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.009 [WARNING][5706] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.009 [INFO][5706] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" HandleID="k8s-pod-network.adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Workload="localhost-k8s-coredns--7d764666f9--tp6j8-eth0" Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.011 [INFO][5706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.018591 containerd[1457]: 2026-03-12 01:39:46.014 [INFO][5698] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625" Mar 12 01:39:46.018591 containerd[1457]: time="2026-03-12T01:39:46.018218874Z" level=info msg="TearDown network for sandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" successfully" Mar 12 01:39:46.022996 containerd[1457]: time="2026-03-12T01:39:46.022906485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:46.022996 containerd[1457]: time="2026-03-12T01:39:46.022971660Z" level=info msg="RemovePodSandbox \"adf0a5e470569e096efcf77d87ba399ee0964f16689d2cea6bec88bb9ecda625\" returns successfully" Mar 12 01:39:46.023936 containerd[1457]: time="2026-03-12T01:39:46.023890057Z" level=info msg="StopPodSandbox for \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\"" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.070 [WARNING][5723] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0", GenerateName:"calico-kube-controllers-fcfbfb698-", Namespace:"calico-system", SelfLink:"", UID:"59eed31b-e704-4270-b07c-3c68fa6fc47c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fcfbfb698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8", Pod:"calico-kube-controllers-fcfbfb698-xmdzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd75f62398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.070 [INFO][5723] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.070 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" iface="eth0" netns="" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.070 [INFO][5723] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.070 [INFO][5723] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.097 [INFO][5731] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.098 [INFO][5731] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.098 [INFO][5731] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.104 [WARNING][5731] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.104 [INFO][5731] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.106 [INFO][5731] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.111617 containerd[1457]: 2026-03-12 01:39:46.108 [INFO][5723] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.111617 containerd[1457]: time="2026-03-12T01:39:46.111558682Z" level=info msg="TearDown network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" successfully" Mar 12 01:39:46.111617 containerd[1457]: time="2026-03-12T01:39:46.111607796Z" level=info msg="StopPodSandbox for \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" returns successfully" Mar 12 01:39:46.112403 containerd[1457]: time="2026-03-12T01:39:46.112236582Z" level=info msg="RemovePodSandbox for \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\"" Mar 12 01:39:46.112403 containerd[1457]: time="2026-03-12T01:39:46.112257512Z" level=info msg="Forcibly stopping sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\"" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.157 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0", GenerateName:"calico-kube-controllers-fcfbfb698-", Namespace:"calico-system", SelfLink:"", UID:"59eed31b-e704-4270-b07c-3c68fa6fc47c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fcfbfb698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca2b1ac5926a93ecdd2dbaa3ed04a9cae2fca495abd75c0700f066e66a654fa8", Pod:"calico-kube-controllers-fcfbfb698-xmdzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd75f62398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.158 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.158 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" iface="eth0" netns="" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.158 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.158 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.185 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.185 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.185 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.192 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.192 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" HandleID="k8s-pod-network.dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Workload="localhost-k8s-calico--kube--controllers--fcfbfb698--xmdzd-eth0" Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.194 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.199968 containerd[1457]: 2026-03-12 01:39:46.197 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2" Mar 12 01:39:46.200472 containerd[1457]: time="2026-03-12T01:39:46.200002543Z" level=info msg="TearDown network for sandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" successfully" Mar 12 01:39:46.205140 containerd[1457]: time="2026-03-12T01:39:46.205093328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:46.205185 containerd[1457]: time="2026-03-12T01:39:46.205161288Z" level=info msg="RemovePodSandbox \"dc5f368c1ed70f47cc9a8ac54966e32621eb9a4de80e4e7c05b4ebc20dcdcef2\" returns successfully" Mar 12 01:39:46.205866 containerd[1457]: time="2026-03-12T01:39:46.205792088Z" level=info msg="StopPodSandbox for \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\"" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.248 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"ce2d517a-d1f9-487b-8064-201ed4846645", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591", Pod:"calico-apiserver-88f858578-8cj8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d4257d396b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.248 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.248 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" iface="eth0" netns="" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.248 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.248 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.277 [INFO][5784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.277 [INFO][5784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.277 [INFO][5784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.283 [WARNING][5784] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.283 [INFO][5784] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.285 [INFO][5784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.290228 containerd[1457]: 2026-03-12 01:39:46.287 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.290634 containerd[1457]: time="2026-03-12T01:39:46.290245807Z" level=info msg="TearDown network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" successfully" Mar 12 01:39:46.290634 containerd[1457]: time="2026-03-12T01:39:46.290270966Z" level=info msg="StopPodSandbox for \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" returns successfully" Mar 12 01:39:46.290875 containerd[1457]: time="2026-03-12T01:39:46.290820739Z" level=info msg="RemovePodSandbox for \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\"" Mar 12 01:39:46.290875 containerd[1457]: time="2026-03-12T01:39:46.290863312Z" level=info msg="Forcibly stopping sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\"" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.332 [WARNING][5802] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0", GenerateName:"calico-apiserver-88f858578-", Namespace:"calico-system", SelfLink:"", UID:"ce2d517a-d1f9-487b-8064-201ed4846645", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f858578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81d2fea9c77f3d675ab5c6cfd6badf6b6408c5d8a6c9496035076d95c6af5591", Pod:"calico-apiserver-88f858578-8cj8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1d4257d396b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.332 [INFO][5802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.332 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" iface="eth0" netns="" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.332 [INFO][5802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.332 [INFO][5802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.361 [INFO][5810] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.362 [INFO][5810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.362 [INFO][5810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.368 [WARNING][5810] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.368 [INFO][5810] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" HandleID="k8s-pod-network.e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Workload="localhost-k8s-calico--apiserver--88f858578--8cj8d-eth0" Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.370 [INFO][5810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.375513 containerd[1457]: 2026-03-12 01:39:46.372 [INFO][5802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f" Mar 12 01:39:46.375513 containerd[1457]: time="2026-03-12T01:39:46.375444469Z" level=info msg="TearDown network for sandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" successfully" Mar 12 01:39:46.380450 containerd[1457]: time="2026-03-12T01:39:46.380358888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:46.380515 containerd[1457]: time="2026-03-12T01:39:46.380450514Z" level=info msg="RemovePodSandbox \"e08fb2d5984d3f19acd459d4693d92f4c95fdfe6f61e1cbe65460431321bda4f\" returns successfully" Mar 12 01:39:46.381061 containerd[1457]: time="2026-03-12T01:39:46.381020774Z" level=info msg="StopPodSandbox for \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\"" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.423 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r96sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4350a8ed-9db0-4145-8365-af9918373d13", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734", Pod:"csi-node-driver-r96sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d58159905", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.423 [INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.423 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" iface="eth0" netns="" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.423 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.423 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.455 [INFO][5836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.455 [INFO][5836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.455 [INFO][5836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.461 [WARNING][5836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.461 [INFO][5836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.463 [INFO][5836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.469320 containerd[1457]: 2026-03-12 01:39:46.466 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.469749 containerd[1457]: time="2026-03-12T01:39:46.469347718Z" level=info msg="TearDown network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" successfully" Mar 12 01:39:46.469749 containerd[1457]: time="2026-03-12T01:39:46.469399857Z" level=info msg="StopPodSandbox for \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" returns successfully" Mar 12 01:39:46.470313 containerd[1457]: time="2026-03-12T01:39:46.469998760Z" level=info msg="RemovePodSandbox for \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\"" Mar 12 01:39:46.470313 containerd[1457]: time="2026-03-12T01:39:46.470033565Z" level=info msg="Forcibly stopping sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\"" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.515 [WARNING][5854] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r96sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4350a8ed-9db0-4145-8365-af9918373d13", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fdbba97a6d0cde850a9fef406712cddf2684ff0e6ee4fbf9ca5acec500820734", Pod:"csi-node-driver-r96sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d58159905", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.516 [INFO][5854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.516 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" iface="eth0" netns="" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.516 [INFO][5854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.516 [INFO][5854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.548 [INFO][5863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.548 [INFO][5863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.549 [INFO][5863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.556 [WARNING][5863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.556 [INFO][5863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" HandleID="k8s-pod-network.67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Workload="localhost-k8s-csi--node--driver--r96sx-eth0" Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.558 [INFO][5863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:39:46.563790 containerd[1457]: 2026-03-12 01:39:46.560 [INFO][5854] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2" Mar 12 01:39:46.564202 containerd[1457]: time="2026-03-12T01:39:46.563842640Z" level=info msg="TearDown network for sandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" successfully" Mar 12 01:39:46.569063 containerd[1457]: time="2026-03-12T01:39:46.569023974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:39:46.569213 containerd[1457]: time="2026-03-12T01:39:46.569135338Z" level=info msg="RemovePodSandbox \"67d8897535a6b9de6210033cde4f03da90a1e25ba17b1d8bbfa0e0aca9811ba2\" returns successfully" Mar 12 01:39:48.445702 systemd[1]: Started sshd@13-10.0.0.156:22-10.0.0.1:37626.service - OpenSSH per-connection server daemon (10.0.0.1:37626). Mar 12 01:39:48.506202 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 37626 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:48.507947 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:48.513038 systemd-logind[1448]: New session 14 of user core. Mar 12 01:39:48.521831 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 01:39:48.688546 sshd[5895]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:48.698717 systemd[1]: sshd@13-10.0.0.156:22-10.0.0.1:37626.service: Deactivated successfully. Mar 12 01:39:48.700470 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:39:48.702053 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:39:48.711014 systemd[1]: Started sshd@14-10.0.0.156:22-10.0.0.1:37642.service - OpenSSH per-connection server daemon (10.0.0.1:37642). Mar 12 01:39:48.712036 systemd-logind[1448]: Removed session 14. Mar 12 01:39:48.744992 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 37642 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:48.746749 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:48.751696 systemd-logind[1448]: New session 15 of user core. Mar 12 01:39:48.760857 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 01:39:49.005814 sshd[5909]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:49.014529 systemd[1]: sshd@14-10.0.0.156:22-10.0.0.1:37642.service: Deactivated successfully. Mar 12 01:39:49.016252 systemd[1]: session-15.scope: Deactivated successfully. 
Mar 12 01:39:49.017918 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Mar 12 01:39:49.025300 systemd[1]: Started sshd@15-10.0.0.156:22-10.0.0.1:37650.service - OpenSSH per-connection server daemon (10.0.0.1:37650). Mar 12 01:39:49.026241 systemd-logind[1448]: Removed session 15. Mar 12 01:39:49.064934 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 37650 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:49.066597 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:49.071151 systemd-logind[1448]: New session 16 of user core. Mar 12 01:39:49.081797 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:39:49.571061 sshd[5923]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:49.579873 systemd[1]: sshd@15-10.0.0.156:22-10.0.0.1:37650.service: Deactivated successfully. Mar 12 01:39:49.581489 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:39:49.585222 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:39:49.597079 systemd[1]: Started sshd@16-10.0.0.156:22-10.0.0.1:37662.service - OpenSSH per-connection server daemon (10.0.0.1:37662). Mar 12 01:39:49.599076 systemd-logind[1448]: Removed session 16. Mar 12 01:39:49.646494 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 37662 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:49.648262 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:49.652906 systemd-logind[1448]: New session 17 of user core. Mar 12 01:39:49.671803 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:39:50.005953 sshd[5948]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:50.013871 systemd[1]: sshd@16-10.0.0.156:22-10.0.0.1:37662.service: Deactivated successfully. Mar 12 01:39:50.016635 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:39:50.019367 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Mar 12 01:39:50.032105 systemd[1]: Started sshd@17-10.0.0.156:22-10.0.0.1:37670.service - OpenSSH per-connection server daemon (10.0.0.1:37670). Mar 12 01:39:50.033339 systemd-logind[1448]: Removed session 17. Mar 12 01:39:50.070080 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 37670 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:50.072153 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:50.076962 systemd-logind[1448]: New session 18 of user core. Mar 12 01:39:50.090095 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 01:39:50.223708 sshd[5962]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:50.227511 systemd[1]: sshd@17-10.0.0.156:22-10.0.0.1:37670.service: Deactivated successfully. Mar 12 01:39:50.229441 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 01:39:50.230272 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Mar 12 01:39:50.231446 systemd-logind[1448]: Removed session 18. Mar 12 01:39:52.871117 kubelet[2511]: I0312 01:39:52.871024 2511 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:39:55.238345 systemd[1]: Started sshd@18-10.0.0.156:22-10.0.0.1:45828.service - OpenSSH per-connection server daemon (10.0.0.1:45828). 
Mar 12 01:39:55.296421 sshd[6049]: Accepted publickey for core from 10.0.0.1 port 45828 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:39:55.298135 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:55.303141 systemd-logind[1448]: New session 19 of user core. Mar 12 01:39:55.314816 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:39:55.498867 sshd[6049]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:55.502812 systemd[1]: sshd@18-10.0.0.156:22-10.0.0.1:45828.service: Deactivated successfully. Mar 12 01:39:55.504742 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:39:55.505515 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:39:55.507149 systemd-logind[1448]: Removed session 19. Mar 12 01:40:00.509762 systemd[1]: Started sshd@19-10.0.0.156:22-10.0.0.1:45842.service - OpenSSH per-connection server daemon (10.0.0.1:45842). Mar 12 01:40:00.548539 sshd[6084]: Accepted publickey for core from 10.0.0.1 port 45842 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:40:00.550161 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:00.554220 systemd-logind[1448]: New session 20 of user core. Mar 12 01:40:00.559826 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 01:40:00.682494 sshd[6084]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:00.686138 systemd[1]: sshd@19-10.0.0.156:22-10.0.0.1:45842.service: Deactivated successfully. Mar 12 01:40:00.687963 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:40:00.688679 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:40:00.689748 systemd-logind[1448]: Removed session 20. Mar 12 01:40:05.697561 systemd[1]: Started sshd@20-10.0.0.156:22-10.0.0.1:36476.service - OpenSSH per-connection server daemon (10.0.0.1:36476). Mar 12 01:40:05.739459 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 36476 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:40:05.741228 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:05.745892 systemd-logind[1448]: New session 21 of user core. Mar 12 01:40:05.756920 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:40:05.920517 sshd[6114]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:05.924944 systemd[1]: sshd@20-10.0.0.156:22-10.0.0.1:36476.service: Deactivated successfully. Mar 12 01:40:05.926819 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:40:05.927514 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:40:05.928621 systemd-logind[1448]: Removed session 21. Mar 12 01:40:07.095859 kubelet[2511]: E0312 01:40:07.095805 2511 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"