Apr 13 23:29:32.188341 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 23:29:32.188369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:29:32.188382 kernel: BIOS-provided physical RAM map:
Apr 13 23:29:32.188390 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 23:29:32.188397 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 13 23:29:32.188404 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 13 23:29:32.188413 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 13 23:29:32.188420 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 13 23:29:32.188427 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 13 23:29:32.188435 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 13 23:29:32.188444 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 13 23:29:32.188451 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 13 23:29:32.188457 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 13 23:29:32.188464 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 13 23:29:32.188473 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 13 23:29:32.188481 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 13 23:29:32.188491 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 13 23:29:32.188498 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 13 23:29:32.188506 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 13 23:29:32.188513 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 23:29:32.188521 kernel: NX (Execute Disable) protection: active
Apr 13 23:29:32.188529 kernel: APIC: Static calls initialized
Apr 13 23:29:32.188537 kernel: efi: EFI v2.7 by EDK II
Apr 13 23:29:32.188544 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 13 23:29:32.188552 kernel: SMBIOS 2.8 present.
Apr 13 23:29:32.188559 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 13 23:29:32.188566 kernel: Hypervisor detected: KVM
Apr 13 23:29:32.188575 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 23:29:32.188583 kernel: kvm-clock: using sched offset of 6333539569 cycles
Apr 13 23:29:32.188591 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 23:29:32.188599 kernel: tsc: Detected 2793.438 MHz processor
Apr 13 23:29:32.188607 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 23:29:32.188616 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 23:29:32.188624 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 13 23:29:32.188632 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 23:29:32.188639 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 23:29:32.188648 kernel: Using GB pages for direct mapping
Apr 13 23:29:32.188655 kernel: Secure boot disabled
Apr 13 23:29:32.188662 kernel: ACPI: Early table checksum verification disabled
Apr 13 23:29:32.188669 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 13 23:29:32.188682 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 13 23:29:32.188691 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188699 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188710 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 13 23:29:32.188718 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188727 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188736 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188744 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:29:32.188753 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 23:29:32.188762 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 13 23:29:32.188773 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 13 23:29:32.188781 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 13 23:29:32.188790 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 13 23:29:32.188798 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 13 23:29:32.188807 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 13 23:29:32.188816 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 13 23:29:32.188824 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 13 23:29:32.188832 kernel: No NUMA configuration found
Apr 13 23:29:32.188840 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 13 23:29:32.188851 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 13 23:29:32.188859 kernel: Zone ranges:
Apr 13 23:29:32.188867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 23:29:32.188875 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 13 23:29:32.188883 kernel: Normal empty
Apr 13 23:29:32.188922 kernel: Movable zone start for each node
Apr 13 23:29:32.188930 kernel: Early memory node ranges
Apr 13 23:29:32.188938 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 23:29:32.188947 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 13 23:29:32.188957 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 13 23:29:32.188965 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 13 23:29:32.188973 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 13 23:29:32.188982 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 13 23:29:32.189000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 13 23:29:32.189009 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 23:29:32.189016 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 23:29:32.189024 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 13 23:29:32.189032 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 23:29:32.189040 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 13 23:29:32.189050 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 13 23:29:32.189057 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 13 23:29:32.189065 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 23:29:32.189073 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 23:29:32.189082 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 23:29:32.189090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 23:29:32.189098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 23:29:32.189107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 23:29:32.189115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 23:29:32.189125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 23:29:32.189134 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 23:29:32.189142 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 23:29:32.189150 kernel: TSC deadline timer available
Apr 13 23:29:32.189157 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 13 23:29:32.189165 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 23:29:32.189191 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 23:29:32.189200 kernel: kvm-guest: setup PV sched yield
Apr 13 23:29:32.189208 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 13 23:29:32.189218 kernel: Booting paravirtualized kernel on KVM
Apr 13 23:29:32.189226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 23:29:32.189234 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 13 23:29:32.189241 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 13 23:29:32.189249 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 13 23:29:32.189257 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 13 23:29:32.189265 kernel: kvm-guest: PV spinlocks enabled
Apr 13 23:29:32.189273 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 23:29:32.189282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:29:32.189292 kernel: random: crng init done
Apr 13 23:29:32.189300 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 23:29:32.189308 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 23:29:32.189315 kernel: Fallback order for Node 0: 0
Apr 13 23:29:32.189323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 13 23:29:32.189330 kernel: Policy zone: DMA32
Apr 13 23:29:32.189337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 23:29:32.189345 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172120K reserved, 0K cma-reserved)
Apr 13 23:29:32.189354 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 13 23:29:32.189362 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 23:29:32.189370 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 23:29:32.189377 kernel: Dynamic Preempt: voluntary
Apr 13 23:29:32.189385 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 23:29:32.189405 kernel: rcu: RCU event tracing is enabled.
Apr 13 23:29:32.189415 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 13 23:29:32.189423 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 23:29:32.189431 kernel: Rude variant of Tasks RCU enabled.
Apr 13 23:29:32.189439 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 23:29:32.189448 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 23:29:32.189456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 13 23:29:32.189466 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 13 23:29:32.189474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 23:29:32.189483 kernel: Console: colour dummy device 80x25
Apr 13 23:29:32.189491 kernel: printk: console [ttyS0] enabled
Apr 13 23:29:32.189499 kernel: ACPI: Core revision 20230628
Apr 13 23:29:32.189510 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 23:29:32.189519 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 23:29:32.189529 kernel: x2apic enabled
Apr 13 23:29:32.189537 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 23:29:32.189545 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 23:29:32.189553 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 23:29:32.189561 kernel: kvm-guest: setup PV IPIs
Apr 13 23:29:32.189570 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 23:29:32.189578 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 13 23:29:32.189588 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 13 23:29:32.189596 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 23:29:32.189604 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 13 23:29:32.189612 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 13 23:29:32.189620 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 23:29:32.189628 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 23:29:32.189636 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 23:29:32.189644 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 13 23:29:32.189655 kernel: RETBleed: Vulnerable
Apr 13 23:29:32.189664 kernel: Speculative Store Bypass: Vulnerable
Apr 13 23:29:32.189672 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 23:29:32.189681 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 13 23:29:32.189689 kernel: active return thunk: its_return_thunk
Apr 13 23:29:32.189698 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 23:29:32.189706 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 23:29:32.189714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 23:29:32.189723 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 23:29:32.189733 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 23:29:32.189741 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 23:29:32.189751 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 23:29:32.189759 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 23:29:32.189768 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 13 23:29:32.189777 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 13 23:29:32.189786 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 13 23:29:32.189795 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 13 23:29:32.189804 kernel: Freeing SMP alternatives memory: 32K
Apr 13 23:29:32.189815 kernel: pid_max: default: 32768 minimum: 301
Apr 13 23:29:32.189823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 23:29:32.189832 kernel: landlock: Up and running.
Apr 13 23:29:32.189840 kernel: SELinux: Initializing.
Apr 13 23:29:32.189848 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 23:29:32.189857 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 23:29:32.189865 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 13 23:29:32.189873 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:29:32.189882 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:29:32.189932 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:29:32.189941 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 13 23:29:32.189950 kernel: signal: max sigframe size: 3632
Apr 13 23:29:32.189959 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 23:29:32.189969 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 23:29:32.189978 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 23:29:32.189987 kernel: smp: Bringing up secondary CPUs ...
Apr 13 23:29:32.189995 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 23:29:32.190005 kernel: .... node #0, CPUs: #1 #2 #3
Apr 13 23:29:32.190015 kernel: smp: Brought up 1 node, 4 CPUs
Apr 13 23:29:32.190024 kernel: smpboot: Max logical packages: 1
Apr 13 23:29:32.190032 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 13 23:29:32.190041 kernel: devtmpfs: initialized
Apr 13 23:29:32.190049 kernel: x86/mm: Memory block size: 128MB
Apr 13 23:29:32.190057 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 13 23:29:32.190066 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 13 23:29:32.190075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 13 23:29:32.190084 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 13 23:29:32.190094 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 13 23:29:32.190102 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 23:29:32.190111 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 13 23:29:32.190119 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 23:29:32.190127 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 23:29:32.190136 kernel: audit: initializing netlink subsys (disabled)
Apr 13 23:29:32.190145 kernel: audit: type=2000 audit(1776122970.634:1): state=initialized audit_enabled=0 res=1
Apr 13 23:29:32.190154 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 23:29:32.190162 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 23:29:32.190251 kernel: cpuidle: using governor menu
Apr 13 23:29:32.190272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 23:29:32.190282 kernel: dca service started, version 1.12.1
Apr 13 23:29:32.190291 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 23:29:32.190301 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 23:29:32.190310 kernel: PCI: Using configuration type 1 for base access
Apr 13 23:29:32.190319 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 23:29:32.190328 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 23:29:32.190339 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 23:29:32.190484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 23:29:32.190503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 23:29:32.190512 kernel: ACPI: Added _OSI(Module Device)
Apr 13 23:29:32.190521 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 23:29:32.190530 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 23:29:32.190539 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 23:29:32.190548 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 23:29:32.190556 kernel: ACPI: Interpreter enabled
Apr 13 23:29:32.190565 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 23:29:32.190595 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 23:29:32.190604 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 23:29:32.190613 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 23:29:32.190622 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 23:29:32.190631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 23:29:32.190821 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 23:29:32.191054 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 23:29:32.191147 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 23:29:32.191159 kernel: PCI host bridge to bus 0000:00
Apr 13 23:29:32.191276 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 23:29:32.191348 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 23:29:32.191593 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 23:29:32.191694 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 13 23:29:32.191853 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 23:29:32.192108 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 13 23:29:32.192266 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 23:29:32.192369 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 23:29:32.192456 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 23:29:32.192533 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 13 23:29:32.192608 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 13 23:29:32.192682 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 23:29:32.192762 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 13 23:29:32.192843 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 23:29:32.193018 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 13 23:29:32.193100 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 13 23:29:32.193203 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 13 23:29:32.193284 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 13 23:29:32.193441 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 13 23:29:32.193527 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 13 23:29:32.193601 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 13 23:29:32.193678 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 13 23:29:32.193946 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 13 23:29:32.194048 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 13 23:29:32.194126 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 13 23:29:32.194242 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 13 23:29:32.194322 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 13 23:29:32.194427 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 23:29:32.194506 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 23:29:32.194589 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 23:29:32.194666 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 13 23:29:32.194742 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 13 23:29:32.194831 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 23:29:32.194940 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 13 23:29:32.194951 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 23:29:32.194960 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 23:29:32.194968 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 23:29:32.194977 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 23:29:32.194986 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 23:29:32.194994 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 23:29:32.195006 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 23:29:32.195015 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 23:29:32.195023 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 23:29:32.195031 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 23:29:32.195039 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 23:29:32.195047 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 23:29:32.195056 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 23:29:32.195065 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 23:29:32.195074 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 23:29:32.195084 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 23:29:32.195093 kernel: iommu: Default domain type: Translated
Apr 13 23:29:32.195101 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 23:29:32.195110 kernel: efivars: Registered efivars operations
Apr 13 23:29:32.195118 kernel: PCI: Using ACPI for IRQ routing
Apr 13 23:29:32.195127 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 23:29:32.195136 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 13 23:29:32.195144 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 13 23:29:32.195152 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 13 23:29:32.195163 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 13 23:29:32.195266 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 23:29:32.195342 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 23:29:32.195419 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 23:29:32.195431 kernel: vgaarb: loaded
Apr 13 23:29:32.195441 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 23:29:32.195450 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 23:29:32.195460 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 23:29:32.195469 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 23:29:32.195481 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 23:29:32.195490 kernel: pnp: PnP ACPI init
Apr 13 23:29:32.195661 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 23:29:32.195689 kernel: pnp: PnP ACPI: found 6 devices
Apr 13 23:29:32.195717 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 23:29:32.195735 kernel: NET: Registered PF_INET protocol family
Apr 13 23:29:32.195754 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 23:29:32.195782 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 23:29:32.195822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 23:29:32.195850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 23:29:32.195859 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 23:29:32.195868 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 23:29:32.195877 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 23:29:32.195925 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 23:29:32.195934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 23:29:32.195943 kernel: NET: Registered PF_XDP protocol family
Apr 13 23:29:32.196076 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 13 23:29:32.196213 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 13 23:29:32.196292 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 23:29:32.196362 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 23:29:32.196429 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 23:29:32.196493 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 13 23:29:32.196561 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 23:29:32.196627 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 13 23:29:32.196641 kernel: PCI: CLS 0 bytes, default 64
Apr 13 23:29:32.196649 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 23:29:32.196658 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 13 23:29:32.196667 kernel: Initialise system trusted keyrings
Apr 13 23:29:32.196675 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 23:29:32.196684 kernel: Key type asymmetric registered
Apr 13 23:29:32.196692 kernel: Asymmetric key parser 'x509' registered
Apr 13 23:29:32.196701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 23:29:32.196710 kernel: io scheduler mq-deadline registered
Apr 13 23:29:32.196721 kernel: io scheduler kyber registered
Apr 13 23:29:32.196730 kernel: io scheduler bfq registered
Apr 13 23:29:32.196738 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 23:29:32.196748 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 23:29:32.196756 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 23:29:32.196765 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 13 23:29:32.196773 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 23:29:32.196783 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 23:29:32.196793 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 23:29:32.196804 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 23:29:32.196813 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 23:29:32.197113 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 13 23:29:32.197131 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 23:29:32.197236 kernel: rtc_cmos 00:04: registered as rtc0
Apr 13 23:29:32.197308 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T23:29:31 UTC (1776122971)
Apr 13 23:29:32.197376 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 13 23:29:32.197387 kernel: intel_pstate: CPU model not supported
Apr 13 23:29:32.197400 kernel: efifb: probing for efifb
Apr 13 23:29:32.197409 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 13 23:29:32.197418 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 13 23:29:32.197427 kernel: efifb: scrolling: redraw
Apr 13 23:29:32.197436 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 13 23:29:32.197445 kernel: Console: switching to colour frame buffer device 100x37
Apr 13 23:29:32.197454 kernel: fb0: EFI VGA frame buffer device
Apr 13 23:29:32.197479 kernel: pstore: Using crash dump compression: deflate
Apr 13 23:29:32.197490 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 23:29:32.197501 kernel: NET: Registered PF_INET6 protocol family
Apr 13 23:29:32.197510 kernel: Segment Routing with IPv6
Apr 13 23:29:32.197519 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 23:29:32.197528 kernel: NET: Registered PF_PACKET protocol family
Apr 13 23:29:32.197538 kernel: Key type dns_resolver registered
Apr 13 23:29:32.197546 kernel: IPI shorthand broadcast: enabled
Apr 13 23:29:32.197554 kernel: sched_clock: Marking stable (1015067451, 332478870)->(1720401072, -372854751)
Apr 13 23:29:32.197563 kernel: registered taskstats version 1
Apr 13 23:29:32.197571 kernel: Loading compiled-in X.509 certificates
Apr 13 23:29:32.197583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 23:29:32.199577 kernel: Key type .fscrypt registered
Apr 13 23:29:32.199665 kernel: Key type fscrypt-provisioning registered
Apr 13 23:29:32.199677 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 23:29:32.199689 kernel: ima: Allocated hash algorithm: sha1
Apr 13 23:29:32.199700 kernel: ima: No architecture policies found
Apr 13 23:29:32.199711 kernel: clk: Disabling unused clocks
Apr 13 23:29:32.199723 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 23:29:32.199733 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 23:29:32.199760 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 23:29:32.199770 kernel: Run /init as init process
Apr 13 23:29:32.199779 kernel: with arguments:
Apr 13 23:29:32.199789 kernel: /init
Apr 13 23:29:32.199799 kernel: with environment:
Apr 13 23:29:32.199808 kernel: HOME=/
Apr 13 23:29:32.199818 kernel: TERM=linux
Apr 13 23:29:32.199830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:29:32.199846 systemd[1]: Detected virtualization kvm.
Apr 13 23:29:32.199859 systemd[1]: Detected architecture x86-64.
Apr 13 23:29:32.199869 systemd[1]: Running in initrd.
Apr 13 23:29:32.199878 systemd[1]: No hostname configured, using default hostname.
Apr 13 23:29:32.200154 systemd[1]: Hostname set to .
Apr 13 23:29:32.200168 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:29:32.200199 systemd[1]: Queued start job for default target initrd.target.
Apr 13 23:29:32.200210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:29:32.200221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:29:32.200232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 23:29:32.200242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:29:32.200252 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 23:29:32.200261 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 23:29:32.200276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 23:29:32.200287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 23:29:32.200298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:29:32.200309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:29:32.200319 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:29:32.200330 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:29:32.200341 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:29:32.200353 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:29:32.200363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:29:32.200375 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:29:32.200387 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 23:29:32.200397 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 23:29:32.200407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:29:32.200417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:29:32.200427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:29:32.200437 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:29:32.200449 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 23:29:32.200459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:29:32.200470 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 23:29:32.200481 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 23:29:32.200491 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:29:32.200503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:29:32.200512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:32.200521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 23:29:32.200558 systemd-journald[193]: Collecting audit messages is disabled.
Apr 13 23:29:32.200584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:29:32.200596 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 23:29:32.200610 systemd-journald[193]: Journal started
Apr 13 23:29:32.200634 systemd-journald[193]: Runtime Journal (/run/log/journal/31a5d7e2184c444aa82522e7bd4286d0) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:29:32.196205 systemd-modules-load[194]: Inserted module 'overlay'
Apr 13 23:29:32.219577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:29:32.222381 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:29:32.223825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:32.225105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:29:32.232140 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:29:32.236143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:29:32.245680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:29:32.260397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 23:29:32.254351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:29:32.303094 kernel: Bridge firewalling registered
Apr 13 23:29:32.302642 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 13 23:29:32.303741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:29:32.314743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:29:32.315942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:29:32.317369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 23:29:32.332033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:29:32.338843 dracut-cmdline[226]: dracut-dracut-053
Apr 13 23:29:32.343035 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:29:32.344105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:29:32.362218 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:29:32.397493 systemd-resolved[241]: Positive Trust Anchors:
Apr 13 23:29:32.397645 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:29:32.397681 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:29:32.400822 systemd-resolved[241]: Defaulting to hostname 'linux'.
Apr 13 23:29:32.402280 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:29:32.403781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:29:32.544958 kernel: SCSI subsystem initialized
Apr 13 23:29:32.557140 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 23:29:32.576397 kernel: iscsi: registered transport (tcp)
Apr 13 23:29:32.608142 kernel: iscsi: registered transport (qla4xxx)
Apr 13 23:29:32.608248 kernel: QLogic iSCSI HBA Driver
Apr 13 23:29:32.717137 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:29:32.733102 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 23:29:32.764715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 23:29:32.764796 kernel: device-mapper: uevent: version 1.0.3
Apr 13 23:29:32.764811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 23:29:32.811974 kernel: raid6: avx512x4 gen() 37350 MB/s
Apr 13 23:29:32.828972 kernel: raid6: avx512x2 gen() 42398 MB/s
Apr 13 23:29:32.846976 kernel: raid6: avx512x1 gen() 35676 MB/s
Apr 13 23:29:32.866397 kernel: raid6: avx2x4 gen() 28185 MB/s
Apr 13 23:29:32.905126 kernel: raid6: avx2x2 gen() 6269 MB/s
Apr 13 23:29:32.924542 kernel: raid6: avx2x1 gen() 3522 MB/s
Apr 13 23:29:32.924635 kernel: raid6: using algorithm avx512x2 gen() 42398 MB/s
Apr 13 23:29:32.943026 kernel: raid6: .... xor() 23768 MB/s, rmw enabled
Apr 13 23:29:32.943107 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 23:29:32.989207 kernel: xor: automatically using best checksumming function avx
Apr 13 23:29:33.148998 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 23:29:33.209800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:29:33.219221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:29:33.235661 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 13 23:29:33.240478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:29:33.250243 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 23:29:33.268413 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Apr 13 23:29:33.303475 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:29:33.318417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:29:33.427981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:29:33.437068 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 23:29:33.457510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:29:33.464496 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:29:33.470579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:29:33.475160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:29:33.490138 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 23:29:33.496692 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 23:29:33.496862 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 23:29:33.501609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:29:33.508530 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 23:29:33.517055 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 23:29:33.517102 kernel: GPT:9289727 != 19775487
Apr 13 23:29:33.517110 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 23:29:33.517117 kernel: GPT:9289727 != 19775487
Apr 13 23:29:33.517124 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 23:29:33.517131 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:29:33.516213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:29:33.516392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:29:33.526256 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:29:33.532217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:29:33.533601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:33.541046 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:33.559370 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 23:29:33.559426 kernel: libata version 3.00 loaded.
Apr 13 23:29:33.560446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:33.562700 kernel: AES CTR mode by8 optimization enabled
Apr 13 23:29:33.628079 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (459)
Apr 13 23:29:33.632962 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Apr 13 23:29:33.636958 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 23:29:33.637160 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 23:29:33.644881 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 23:29:33.645125 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 23:29:33.648790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 23:29:33.653319 kernel: scsi host0: ahci
Apr 13 23:29:33.653494 kernel: scsi host1: ahci
Apr 13 23:29:33.653588 kernel: scsi host2: ahci
Apr 13 23:29:33.653677 kernel: scsi host3: ahci
Apr 13 23:29:33.654702 kernel: scsi host4: ahci
Apr 13 23:29:33.660383 kernel: scsi host5: ahci
Apr 13 23:29:33.660616 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 13 23:29:33.660632 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 13 23:29:33.663433 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 13 23:29:33.663480 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 13 23:29:33.668461 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 13 23:29:33.668517 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 13 23:29:33.682998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 23:29:33.686800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 23:29:33.689070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 23:29:33.692633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:29:33.709147 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 23:29:33.710382 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:29:33.710458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:33.714770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:33.723374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:33.731794 disk-uuid[556]: Primary Header is updated.
Apr 13 23:29:33.731794 disk-uuid[556]: Secondary Entries is updated.
Apr 13 23:29:33.731794 disk-uuid[556]: Secondary Header is updated.
Apr 13 23:29:33.738939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:29:33.746949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:29:33.752068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:29:33.754260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:33.770133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:29:33.795247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:29:33.983966 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 23:29:33.999948 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 23:29:34.000037 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 23:29:34.001935 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 23:29:34.004963 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 23:29:34.005005 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 23:29:34.007634 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 23:29:34.007687 kernel: ata3.00: applying bridge limits
Apr 13 23:29:34.009626 kernel: ata3.00: configured for UDMA/100
Apr 13 23:29:34.014093 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 23:29:34.106256 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 23:29:34.106575 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 23:29:34.119043 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 23:29:34.755218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:29:34.755432 disk-uuid[557]: The operation has completed successfully.
Apr 13 23:29:34.840411 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 23:29:34.840509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 23:29:34.872057 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 23:29:34.880775 sh[598]: Success
Apr 13 23:29:34.897174 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 23:29:34.946554 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 23:29:35.010070 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 23:29:35.015695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 23:29:35.037976 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 23:29:35.038048 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:29:35.038072 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 23:29:35.039424 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 23:29:35.040637 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 23:29:35.053831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 23:29:35.059049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 23:29:35.080620 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 23:29:35.083617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 23:29:35.097110 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:29:35.097170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:29:35.097204 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:29:35.103015 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:29:35.112287 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 23:29:35.117058 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:29:35.123644 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 23:29:35.130303 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 23:29:35.190675 ignition[694]: Ignition 2.19.0
Apr 13 23:29:35.190823 ignition[694]: Stage: fetch-offline
Apr 13 23:29:35.190875 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:35.191059 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:35.191210 ignition[694]: parsed url from cmdline: ""
Apr 13 23:29:35.191214 ignition[694]: no config URL provided
Apr 13 23:29:35.191221 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 23:29:35.191233 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Apr 13 23:29:35.191259 ignition[694]: op(1): [started] loading QEMU firmware config module
Apr 13 23:29:35.191264 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 23:29:35.202997 ignition[694]: op(1): [finished] loading QEMU firmware config module
Apr 13 23:29:35.204698 ignition[694]: parsing config with SHA512: 9c038d707a3736cf97f1fc8e636679488518bc23fad9068ffc25ca7dff839eb07c7c38906e427205d88000731f45080e4732942c8cb8917dd0ac6e22d96b20fc
Apr 13 23:29:35.209828 unknown[694]: fetched base config from "system"
Apr 13 23:29:35.209843 unknown[694]: fetched user config from "qemu"
Apr 13 23:29:35.210470 ignition[694]: fetch-offline: fetch-offline passed
Apr 13 23:29:35.210561 ignition[694]: Ignition finished successfully
Apr 13 23:29:35.216420 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:29:35.241454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:29:35.260303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:29:35.313879 systemd-networkd[787]: lo: Link UP
Apr 13 23:29:35.314026 systemd-networkd[787]: lo: Gained carrier
Apr 13 23:29:35.314881 systemd-networkd[787]: Enumeration completed
Apr 13 23:29:35.315079 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:29:35.315398 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:29:35.315400 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:29:35.315678 systemd[1]: Reached target network.target - Network.
Apr 13 23:29:35.316376 systemd-networkd[787]: eth0: Link UP
Apr 13 23:29:35.316379 systemd-networkd[787]: eth0: Gained carrier
Apr 13 23:29:35.316385 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:29:35.319286 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 23:29:35.328272 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 23:29:35.343990 ignition[790]: Ignition 2.19.0
Apr 13 23:29:35.338019 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:29:35.343998 ignition[790]: Stage: kargs
Apr 13 23:29:35.348850 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 23:29:35.344290 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:35.344302 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:35.346537 ignition[790]: kargs: kargs passed
Apr 13 23:29:35.346619 ignition[790]: Ignition finished successfully
Apr 13 23:29:35.361643 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 23:29:35.383958 ignition[800]: Ignition 2.19.0
Apr 13 23:29:35.383977 ignition[800]: Stage: disks
Apr 13 23:29:35.384167 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:35.384177 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:35.385016 ignition[800]: disks: disks passed
Apr 13 23:29:35.388875 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 23:29:35.385076 ignition[800]: Ignition finished successfully
Apr 13 23:29:35.393281 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 23:29:35.394398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:29:35.398648 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:29:35.399324 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:29:35.404403 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:29:35.427360 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 23:29:35.442609 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 23:29:35.448435 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 23:29:35.463375 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 23:29:35.616927 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 23:29:35.617338 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 23:29:35.618730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:29:35.626155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:29:35.631231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 23:29:35.635558 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Apr 13 23:29:35.632571 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 23:29:35.647512 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:29:35.647535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:29:35.647545 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:29:35.647552 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:29:35.632610 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 23:29:35.632632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:29:35.649732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:29:35.680642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 23:29:35.685808 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 23:29:35.741686 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 23:29:35.747563 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Apr 13 23:29:35.755308 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 23:29:35.761966 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 23:29:35.930989 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 23:29:35.945382 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 23:29:35.949625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 23:29:36.032982 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:29:36.035181 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 23:29:36.063588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 23:29:36.076228 ignition[931]: INFO : Ignition 2.19.0
Apr 13 23:29:36.076228 ignition[931]: INFO : Stage: mount
Apr 13 23:29:36.079872 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:36.079872 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:36.079872 ignition[931]: INFO : mount: mount passed
Apr 13 23:29:36.079872 ignition[931]: INFO : Ignition finished successfully
Apr 13 23:29:36.090022 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 23:29:36.099241 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 23:29:36.111800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:29:36.129068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Apr 13 23:29:36.129133 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:29:36.131417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:29:36.131478 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:29:36.136936 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:29:36.139297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:29:36.177390 ignition[961]: INFO : Ignition 2.19.0
Apr 13 23:29:36.177390 ignition[961]: INFO : Stage: files
Apr 13 23:29:36.181126 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:36.181126 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:36.181126 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 23:29:36.181126 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 23:29:36.181126 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:29:36.194458 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 23:29:36.183742 unknown[961]: wrote ssh authorized keys file for user: core
Apr 13 23:29:36.658951 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Apr 13 23:29:37.199630 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 13 23:29:37.798935 kernel: hrtimer: interrupt took 5888400 ns
Apr 13 23:29:38.729394 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:29:38.729394 ignition[961]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Apr 13 23:29:38.738422 ignition[961]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:29:38.738422 ignition[961]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:29:38.738422 ignition[961]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Apr 13 23:29:38.738422 ignition[961]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:29:38.817734 ignition[961]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:29:38.828275 ignition[961]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:29:38.832699 ignition[961]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:29:38.832699 ignition[961]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:29:38.832699 ignition[961]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:29:38.832699 ignition[961]: INFO : files: files passed
Apr 13 23:29:38.832699 ignition[961]: INFO : Ignition finished successfully
Apr 13 23:29:38.850704 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 23:29:38.866373 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 23:29:38.872031 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 23:29:38.875550 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 23:29:38.875672 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 23:29:38.892103 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 13 23:29:38.898735 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:29:38.898735 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:29:38.907907 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:29:38.908489 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:29:38.914939 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 23:29:38.935604 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 23:29:39.011964 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 23:29:39.012064 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 23:29:39.016267 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 23:29:39.016578 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 23:29:39.020414 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 23:29:39.021235 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 23:29:39.044172 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:29:39.058557 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 23:29:39.069321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:29:39.070256 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:29:39.074836 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 23:29:39.075575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 23:29:39.075685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:29:39.080329 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 23:29:39.080767 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 23:29:39.085741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 23:29:39.091829 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:29:39.092912 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 23:29:39.100748 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 23:29:39.101867 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:29:39.106728 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 23:29:39.110529 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 23:29:39.113514 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 23:29:39.117494 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 23:29:39.117634 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:29:39.120748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:29:39.121819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:29:39.133862 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 23:29:39.134587 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:29:39.135367 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 23:29:39.135487 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:29:39.144706 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 23:29:39.145084 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:29:39.145858 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 23:29:39.153760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 23:29:39.161710 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:29:39.173592 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 23:29:39.213785 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 23:29:39.215757 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 23:29:39.215961 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:29:39.221005 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 23:29:39.221120 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:29:39.225663 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 23:29:39.226098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:29:39.226724 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 23:29:39.226852 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 23:29:39.258536 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 23:29:39.263125 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 23:29:39.264595 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 23:29:39.264807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:29:39.282557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 23:29:39.284052 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:29:39.288400 ignition[1015]: INFO : Ignition 2.19.0
Apr 13 23:29:39.288400 ignition[1015]: INFO : Stage: umount
Apr 13 23:29:39.292721 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:29:39.292721 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:29:39.292721 ignition[1015]: INFO : umount: umount passed
Apr 13 23:29:39.292721 ignition[1015]: INFO : Ignition finished successfully
Apr 13 23:29:39.293880 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 23:29:39.294144 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 23:29:39.297348 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 23:29:39.297462 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 23:29:39.302087 systemd[1]: Stopped target network.target - Network.
Apr 13 23:29:39.305030 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 23:29:39.305171 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 23:29:39.310818 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 23:29:39.311008 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 23:29:39.312496 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 23:29:39.312553 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 23:29:39.317672 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 23:29:39.317750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 23:29:39.322249 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 23:29:39.350633 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 23:29:39.354276 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 23:29:39.355145 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 23:29:39.355771 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 23:29:39.359131 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 23:29:39.359202 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 23:29:39.362056 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 13 23:29:39.368676 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 23:29:39.368805 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 23:29:39.373019 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 23:29:39.373199 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 23:29:39.375353 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 23:29:39.375402 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:29:39.396288 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 23:29:39.397924 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 23:29:39.397982 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:29:39.400622 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 23:29:39.400675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:29:39.404415 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 23:29:39.404462 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:29:39.408432 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 23:29:39.408548 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:29:39.413822 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:29:39.431686 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 23:29:39.431828 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 23:29:39.444443 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 23:29:39.444861 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:29:39.453650 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 23:29:39.453724 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:29:39.454996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 23:29:39.455051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:29:39.462380 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 23:29:39.462518 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:29:39.504520 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 23:29:39.504626 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:29:39.512512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:29:39.512646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:29:39.536512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 23:29:39.539803 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 23:29:39.539879 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:29:39.545053 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 23:29:39.545110 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:29:39.547775 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 23:29:39.547833 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:29:39.552727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:29:39.552797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:39.556987 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 23:29:39.557090 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 23:29:39.561066 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 23:29:39.581438 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 23:29:39.594555 systemd[1]: Switching root.
Apr 13 23:29:39.627476 systemd-journald[193]: Journal stopped
Apr 13 23:29:41.085470 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 13 23:29:41.085573 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 23:29:41.085594 kernel: SELinux: policy capability open_perms=1
Apr 13 23:29:41.085608 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 23:29:41.085622 kernel: SELinux: policy capability always_check_network=0
Apr 13 23:29:41.085632 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 23:29:41.085642 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 23:29:41.085656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 23:29:41.085666 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 23:29:41.085677 kernel: audit: type=1403 audit(1776122979.821:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 23:29:41.085689 systemd[1]: Successfully loaded SELinux policy in 54.703ms.
Apr 13 23:29:41.085708 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.701ms.
Apr 13 23:29:41.085722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:29:41.085734 systemd[1]: Detected virtualization kvm.
Apr 13 23:29:41.085746 systemd[1]: Detected architecture x86-64.
Apr 13 23:29:41.085756 systemd[1]: Detected first boot.
Apr 13 23:29:41.085766 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:29:41.085777 zram_generator::config[1060]: No configuration found.
Apr 13 23:29:41.085795 systemd[1]: Populated /etc with preset unit settings.
Apr 13 23:29:41.085806 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 23:29:41.085817 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 23:29:41.085827 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 23:29:41.085839 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 23:29:41.085850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 23:29:41.085860 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 23:29:41.085871 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 23:29:41.085920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 23:29:41.085951 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 23:29:41.085963 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 23:29:41.085974 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 23:29:41.085985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:29:41.085996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:29:41.086006 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 23:29:41.086021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 23:29:41.086032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 23:29:41.086045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:29:41.086056 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 23:29:41.086067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:29:41.086078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 23:29:41.086144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 23:29:41.086176 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:29:41.086198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 23:29:41.086260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:29:41.086292 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:29:41.086313 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:29:41.086334 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:29:41.086355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 23:29:41.086366 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 23:29:41.086378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:29:41.086390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:29:41.086401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:29:41.086412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 23:29:41.086425 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 23:29:41.086435 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 23:29:41.086446 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 23:29:41.086457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:29:41.086468 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 23:29:41.086480 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 23:29:41.086491 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 23:29:41.086503 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 23:29:41.086516 systemd[1]: Reached target machines.target - Containers.
Apr 13 23:29:41.086526 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 23:29:41.086537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:29:41.086548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:29:41.086559 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 23:29:41.086571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:29:41.086582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:29:41.086593 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:29:41.086603 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 23:29:41.086617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:29:41.086628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 23:29:41.086639 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 23:29:41.086651 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 23:29:41.086661 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 23:29:41.086672 kernel: fuse: init (API version 7.39)
Apr 13 23:29:41.086683 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 23:29:41.086695 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:29:41.086707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:29:41.086718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 23:29:41.086729 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 23:29:41.086740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:29:41.086751 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 23:29:41.086761 systemd[1]: Stopped verity-setup.service.
Apr 13 23:29:41.086790 systemd-journald[1141]: Collecting audit messages is disabled.
Apr 13 23:29:41.086813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:29:41.086827 systemd-journald[1141]: Journal started
Apr 13 23:29:41.086851 systemd-journald[1141]: Runtime Journal (/run/log/journal/31a5d7e2184c444aa82522e7bd4286d0) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:29:40.587435 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 23:29:40.605661 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 23:29:40.606765 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 23:29:41.094929 kernel: ACPI: bus type drm_connector registered
Apr 13 23:29:41.094979 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:29:41.099534 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 23:29:41.101960 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 23:29:41.104375 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 23:29:41.106438 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 23:29:41.109416 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 23:29:41.112991 kernel: loop: module loaded
Apr 13 23:29:41.113305 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 23:29:41.115636 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 23:29:41.118386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:29:41.123781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 23:29:41.124928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 23:29:41.127497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:29:41.127665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:29:41.130165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:29:41.130653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:29:41.133466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:29:41.134265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:29:41.137654 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 23:29:41.138350 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 23:29:41.140471 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:29:41.140637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:29:41.142955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:29:41.145696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 23:29:41.148094 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 23:29:41.161709 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 23:29:41.203201 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 23:29:41.206675 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 23:29:41.208875 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 23:29:41.208941 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:29:41.211472 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 23:29:41.224760 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 23:29:41.230295 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 23:29:41.233143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:29:41.236005 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 23:29:41.247072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 23:29:41.251154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:29:41.263813 systemd-journald[1141]: Time spent on flushing to /var/log/journal/31a5d7e2184c444aa82522e7bd4286d0 is 17.236ms for 978 entries.
Apr 13 23:29:41.263813 systemd-journald[1141]: System Journal (/var/log/journal/31a5d7e2184c444aa82522e7bd4286d0) is 8.0M, max 195.6M, 187.6M free.
Apr 13 23:29:41.295381 systemd-journald[1141]: Received client request to flush runtime journal.
Apr 13 23:29:41.258169 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 23:29:41.261615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:29:41.262977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:29:41.273311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 23:29:41.280850 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:29:41.287007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:29:41.293695 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 23:29:41.297957 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 23:29:41.309082 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 23:29:41.314495 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 23:29:41.319807 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 23:29:41.324033 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 23:29:41.330715 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 23:29:41.349565 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 23:29:41.359081 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 23:29:41.362333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:29:41.423500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 23:29:41.440929 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Apr 13 23:29:41.440942 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Apr 13 23:29:41.444271 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 23:29:41.448440 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 23:29:41.449351 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 23:29:41.452942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:29:41.462269 kernel: loop1: detected capacity change from 0 to 219192
Apr 13 23:29:41.473808 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 23:29:41.504075 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 23:29:41.514190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:29:41.517941 kernel: loop2: detected capacity change from 0 to 140768
Apr 13 23:29:41.534983 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 13 23:29:41.535001 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 13 23:29:41.541469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:29:41.589940 kernel: loop3: detected capacity change from 0 to 142488
Apr 13 23:29:41.620969 kernel: loop4: detected capacity change from 0 to 219192
Apr 13 23:29:41.663940 kernel: loop5: detected capacity change from 0 to 140768
Apr 13 23:29:41.729020 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 13 23:29:41.729522 (sd-merge)[1203]: Merged extensions into '/usr'.
Apr 13 23:29:41.735734 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 23:29:41.735760 systemd[1]: Reloading...
Apr 13 23:29:41.810026 zram_generator::config[1235]: No configuration found.
Apr 13 23:29:41.940331 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 23:29:41.944875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:29:42.037289 systemd[1]: Reloading finished in 300 ms.
Apr 13 23:29:42.083817 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 23:29:42.087451 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 23:29:42.100431 systemd[1]: Starting ensure-sysext.service...
Apr 13 23:29:42.104114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:29:42.109637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 23:29:42.124544 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:29:42.127351 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Apr 13 23:29:42.127445 systemd[1]: Reloading...
Apr 13 23:29:42.129329 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 23:29:42.129877 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 23:29:42.130883 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 23:29:42.131386 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 13 23:29:42.131462 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 13 23:29:42.134766 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:29:42.134774 systemd-tmpfiles[1267]: Skipping /boot
Apr 13 23:29:42.143697 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:29:42.143811 systemd-tmpfiles[1267]: Skipping /boot
Apr 13 23:29:42.160858 systemd-udevd[1270]: Using default interface naming scheme 'v255'.
Apr 13 23:29:42.251118 zram_generator::config[1295]: No configuration found.
Apr 13 23:29:42.303797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1321)
Apr 13 23:29:42.363995 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 13 23:29:42.376427 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 23:29:42.376647 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 23:29:42.376733 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 23:29:42.376742 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 23:29:42.378501 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 23:29:42.380814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:29:42.385939 kernel: ACPI: button: Power Button [PWRF]
Apr 13 23:29:42.422685 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 23:29:42.441875 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 23:29:42.442541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:29:42.447794 systemd[1]: Reloading finished in 320 ms.
Apr 13 23:29:42.589409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:29:42.616256 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:29:42.670151 systemd[1]: Finished ensure-sysext.service.
Apr 13 23:29:42.672315 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 23:29:42.688498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:29:42.700192 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 23:29:42.704932 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 23:29:42.708093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:29:42.709845 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 23:29:42.716457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:29:42.722100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:29:42.729156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:29:42.733026 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:29:42.737497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:29:42.740354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:29:42.742314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 23:29:42.754074 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 23:29:42.762717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:29:42.832544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:29:42.837207 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 23:29:42.837575 augenrules[1391]: No rules
Apr 13 23:29:42.841870 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 23:29:42.848194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:29:42.851292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:29:42.852805 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 23:29:42.857194 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 23:29:42.860842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:29:42.861081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:29:42.864074 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:29:42.864334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:29:42.867768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:29:42.868315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:29:42.871167 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:29:42.871300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:29:42.873873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 23:29:42.876541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 23:29:42.879245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 23:29:42.892287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:29:42.901369 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 23:29:42.902450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:29:42.902537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:29:42.904808 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 23:29:42.909430 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:29:42.910140 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 23:29:42.911625 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 23:29:42.912169 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 23:29:42.929626 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 23:29:42.936757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:29:42.952193 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 23:29:42.967307 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 23:29:43.033311 systemd-networkd[1386]: lo: Link UP
Apr 13 23:29:43.033330 systemd-networkd[1386]: lo: Gained carrier
Apr 13 23:29:43.035370 systemd-networkd[1386]: Enumeration completed
Apr 13 23:29:43.035596 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:29:43.036002 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:29:43.036006 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:29:43.037060 systemd-networkd[1386]: eth0: Link UP
Apr 13 23:29:43.037077 systemd-networkd[1386]: eth0: Gained carrier
Apr 13 23:29:43.037094 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:29:43.047441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 23:29:43.050543 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 23:29:43.054783 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 23:29:43.058814 systemd-resolved[1389]: Positive Trust Anchors:
Apr 13 23:29:43.059186 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:29:43.059248 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:29:43.060040 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:29:43.061253 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Apr 13 23:29:43.721540 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 13 23:29:43.721581 systemd-timesyncd[1395]: Initial clock synchronization to Mon 2026-04-13 23:29:43.721361 UTC.
Apr 13 23:29:43.736180 systemd-resolved[1389]: Defaulting to hostname 'linux'.
Apr 13 23:29:43.763401 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:29:43.765664 systemd[1]: Reached target network.target - Network.
Apr 13 23:29:43.767421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:29:43.769930 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:29:43.772220 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 23:29:43.775053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 23:29:43.777781 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 23:29:43.780475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 23:29:43.783549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 23:29:43.786471 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 23:29:43.786523 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:29:43.788644 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:29:43.791542 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 23:29:43.795507 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 23:29:43.811726 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 23:29:43.814864 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 23:29:43.817284 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:29:43.818916 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:29:43.820994 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:29:43.821030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:29:43.823017 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 23:29:43.826953 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 23:29:43.832008 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 23:29:43.835597 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 23:29:43.836854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 23:29:43.838621 jq[1434]: false
Apr 13 23:29:43.840014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 23:29:43.842707 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 23:29:43.848175 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 23:29:43.853960 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 23:29:43.856158 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 23:29:43.856721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 23:29:43.857625 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 23:29:43.862204 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 23:29:43.867707 dbus-daemon[1433]: [system] SELinux support is enabled
Apr 13 23:29:43.872203 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found loop3
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found loop4
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found loop5
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found sr0
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda1
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda2
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda3
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found usr
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda4
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda6
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda7
Apr 13 23:29:43.879706 extend-filesystems[1435]: Found vda9
Apr 13 23:29:43.879706 extend-filesystems[1435]: Checking size of /dev/vda9
Apr 13 23:29:43.932979 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 13 23:29:43.933045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1321)
Apr 13 23:29:43.933061 jq[1444]: true
Apr 13 23:29:43.881691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 23:29:43.933616 extend-filesystems[1435]: Resized partition /dev/vda9
Apr 13 23:29:43.939991 update_engine[1443]: I20260413 23:29:43.886054 1443 main.cc:92] Flatcar Update Engine starting
Apr 13 23:29:43.939991 update_engine[1443]: I20260413 23:29:43.892293 1443 update_check_scheduler.cc:74] Next update check in 5m22s
Apr 13 23:29:43.881925 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 23:29:43.940696 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024)
Apr 13 23:29:43.882218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 23:29:43.945060 jq[1454]: true
Apr 13 23:29:43.882354 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 23:29:43.896065 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 23:29:43.896343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 23:29:43.903514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 23:29:43.903543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 23:29:43.904427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 23:29:43.904441 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 23:29:43.909901 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 23:29:43.932047 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 23:29:43.942890 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 23:29:43.961870 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 13 23:29:43.980586 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 13 23:29:43.980586 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 13 23:29:43.980586 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 13 23:29:43.988058 extend-filesystems[1435]: Resized filesystem in /dev/vda9
Apr 13 23:29:43.987067 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 23:29:43.987493 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 23:29:43.988607 systemd-logind[1442]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 13 23:29:43.988626 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 23:29:43.992430 systemd-logind[1442]: New seat seat0.
Apr 13 23:29:43.997717 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 23:29:44.005888 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 23:29:44.066262 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 23:29:44.070690 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 23:29:44.077421 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 23:29:44.323537 containerd[1460]: time="2026-04-13T23:29:44.323106709Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 23:29:44.360734 containerd[1460]: time="2026-04-13T23:29:44.360654710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.371631 containerd[1460]: time="2026-04-13T23:29:44.371347581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:29:44.371631 containerd[1460]: time="2026-04-13T23:29:44.371426917Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 23:29:44.371631 containerd[1460]: time="2026-04-13T23:29:44.371450056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 23:29:44.371930 containerd[1460]: time="2026-04-13T23:29:44.371749060Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 23:29:44.372344 containerd[1460]: time="2026-04-13T23:29:44.372023381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.372459 containerd[1460]: time="2026-04-13T23:29:44.372427106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:29:44.372486 containerd[1460]: time="2026-04-13T23:29:44.372461449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.373686703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.373724740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.373743587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.373754377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374069757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374313488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374715084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374735972Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374879641Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 23:29:44.375193 containerd[1460]: time="2026-04-13T23:29:44.374921438Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 23:29:44.387048 containerd[1460]: time="2026-04-13T23:29:44.386957566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 23:29:44.387048 containerd[1460]: time="2026-04-13T23:29:44.387046126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 23:29:44.387048 containerd[1460]: time="2026-04-13T23:29:44.387071979Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 23:29:44.387267 containerd[1460]: time="2026-04-13T23:29:44.387090933Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 23:29:44.387267 containerd[1460]: time="2026-04-13T23:29:44.387111559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 23:29:44.387329 containerd[1460]: time="2026-04-13T23:29:44.387315606Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 23:29:44.387711 containerd[1460]: time="2026-04-13T23:29:44.387567360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 23:29:44.387971 containerd[1460]: time="2026-04-13T23:29:44.387923550Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 23:29:44.387971 containerd[1460]: time="2026-04-13T23:29:44.387954726Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 23:29:44.388022 containerd[1460]: time="2026-04-13T23:29:44.387969524Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 23:29:44.388022 containerd[1460]: time="2026-04-13T23:29:44.387987359Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388022 containerd[1460]: time="2026-04-13T23:29:44.388001718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388022 containerd[1460]: time="2026-04-13T23:29:44.388016014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388109 containerd[1460]: time="2026-04-13T23:29:44.388033482Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388109 containerd[1460]: time="2026-04-13T23:29:44.388050760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388109 containerd[1460]: time="2026-04-13T23:29:44.388073884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388109 containerd[1460]: time="2026-04-13T23:29:44.388088516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388109 containerd[1460]: time="2026-04-13T23:29:44.388101410Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 23:29:44.388226 containerd[1460]: time="2026-04-13T23:29:44.388126322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388226 containerd[1460]: time="2026-04-13T23:29:44.388164439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388226 containerd[1460]: time="2026-04-13T23:29:44.388178143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388226 containerd[1460]: time="2026-04-13T23:29:44.388193189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388226 containerd[1460]: time="2026-04-13T23:29:44.388209216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388227556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388241521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388255858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388271058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388287498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388301820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388316569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388331420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388363 containerd[1460]: time="2026-04-13T23:29:44.388349003Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388371822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388385320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388398342Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388447036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388470550Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388481991Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388495536Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388505679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388522085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 23:29:44.388535 containerd[1460]: time="2026-04-13T23:29:44.388533571Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 23:29:44.388739 containerd[1460]: time="2026-04-13T23:29:44.388545198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 23:29:44.389180 containerd[1460]: time="2026-04-13T23:29:44.389025571Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 23:29:44.389180 containerd[1460]: time="2026-04-13T23:29:44.389146069Z" level=info msg="Connect containerd service"
Apr 13 23:29:44.389414 containerd[1460]: time="2026-04-13T23:29:44.389208656Z" level=info msg="using legacy CRI server"
Apr 13 23:29:44.389414 containerd[1460]: time="2026-04-13T23:29:44.389217084Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 23:29:44.389414 containerd[1460]: time="2026-04-13T23:29:44.389365225Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 23:29:44.390349 containerd[1460]: time="2026-04-13T23:29:44.390291496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 23:29:44.390550 containerd[1460]: time="2026-04-13T23:29:44.390510666Z" level=info msg="Start subscribing containerd event"
Apr 13 23:29:44.390726 containerd[1460]: time="2026-04-13T23:29:44.390651253Z" level=info msg="Start recovering state"
Apr 13 23:29:44.390726 containerd[1460]: time="2026-04-13T23:29:44.390674112Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 23:29:44.390726 containerd[1460]: time="2026-04-13T23:29:44.390717822Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 23:29:44.390863 containerd[1460]: time="2026-04-13T23:29:44.390811986Z" level=info msg="Start event monitor"
Apr 13 23:29:44.390863 containerd[1460]: time="2026-04-13T23:29:44.390828223Z" level=info msg="Start snapshots syncer"
Apr 13 23:29:44.390863 containerd[1460]: time="2026-04-13T23:29:44.390836550Z" level=info msg="Start cni network conf syncer for default"
Apr 13 23:29:44.390863 containerd[1460]: time="2026-04-13T23:29:44.390842896Z" level=info msg="Start streaming server"
Apr 13 23:29:44.390952 containerd[1460]: time="2026-04-13T23:29:44.390896417Z" level=info msg="containerd successfully booted in 0.070417s"
Apr 13 23:29:44.391102 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 23:29:44.431487 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 23:29:44.463295 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 23:29:44.482712 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 23:29:44.497374 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 23:29:44.497594 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 23:29:44.513443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 23:29:44.586934 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 23:29:44.604525 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 23:29:44.608599 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 23:29:44.612266 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 23:29:45.090453 systemd-networkd[1386]: eth0: Gained IPv6LL
Apr 13 23:29:45.101362 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 23:29:45.106053 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 23:29:45.121440 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 13 23:29:45.175230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:29:45.182715 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 23:29:45.216578 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 23:29:45.220024 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 13 23:29:45.220449 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 13 23:29:45.224901 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 23:29:46.281235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:29:46.284855 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 23:29:46.287201 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:29:46.287409 systemd[1]: Startup finished in 1.227s (kernel) + 8.007s (initrd) + 5.858s (userspace) = 15.093s. Apr 13 23:29:46.844351 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 23:29:46.846283 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:49614.service - OpenSSH per-connection server daemon (10.0.0.1:49614). Apr 13 23:29:46.913263 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:46.915750 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:46.977411 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 13 23:29:46.979133 kubelet[1538]: E0413 23:29:46.979056 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:29:46.988854 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 23:29:46.989272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:29:46.989422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:29:46.989946 systemd[1]: kubelet.service: Consumed 1.194s CPU time. Apr 13 23:29:46.995943 systemd-logind[1442]: New session 1 of user core. Apr 13 23:29:47.005473 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 23:29:47.021747 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 23:29:47.024717 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 23:29:47.207911 systemd[1555]: Queued start job for default target default.target. Apr 13 23:29:47.218424 systemd[1555]: Created slice app.slice - User Application Slice. Apr 13 23:29:47.218597 systemd[1555]: Reached target paths.target - Paths. Apr 13 23:29:47.218616 systemd[1555]: Reached target timers.target - Timers. Apr 13 23:29:47.221275 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 23:29:47.250899 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 23:29:47.251034 systemd[1555]: Reached target sockets.target - Sockets. Apr 13 23:29:47.251049 systemd[1555]: Reached target basic.target - Basic System. Apr 13 23:29:47.251104 systemd[1555]: Reached target default.target - Main User Target. Apr 13 23:29:47.251131 systemd[1555]: Startup finished in 216ms. 
Apr 13 23:29:47.251240 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 23:29:47.252311 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 23:29:47.325126 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630). Apr 13 23:29:47.376399 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:47.381656 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:47.394077 systemd-logind[1442]: New session 2 of user core. Apr 13 23:29:47.408621 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 23:29:47.477425 sshd[1566]: pam_unix(sshd:session): session closed for user core Apr 13 23:29:47.494252 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:49630.service: Deactivated successfully. Apr 13 23:29:47.495589 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 23:29:47.497450 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Apr 13 23:29:47.498778 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:49646.service - OpenSSH per-connection server daemon (10.0.0.1:49646). Apr 13 23:29:47.500240 systemd-logind[1442]: Removed session 2. Apr 13 23:29:47.556860 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 49646 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:47.558361 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:47.564718 systemd-logind[1442]: New session 3 of user core. Apr 13 23:29:47.580385 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 23:29:47.677074 sshd[1573]: pam_unix(sshd:session): session closed for user core Apr 13 23:29:47.685818 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:49646.service: Deactivated successfully. 
Apr 13 23:29:47.687788 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 23:29:47.689230 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Apr 13 23:29:47.690328 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Apr 13 23:29:47.691447 systemd-logind[1442]: Removed session 3. Apr 13 23:29:47.742543 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:47.744328 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:47.750049 systemd-logind[1442]: New session 4 of user core. Apr 13 23:29:47.769627 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 23:29:47.886058 sshd[1580]: pam_unix(sshd:session): session closed for user core Apr 13 23:29:47.902337 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:49656.service: Deactivated successfully. Apr 13 23:29:47.905350 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 23:29:47.907403 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Apr 13 23:29:47.921735 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:49670.service - OpenSSH per-connection server daemon (10.0.0.1:49670). Apr 13 23:29:47.922676 systemd-logind[1442]: Removed session 4. Apr 13 23:29:47.976770 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 49670 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:47.979345 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:47.988184 systemd-logind[1442]: New session 5 of user core. Apr 13 23:29:48.004305 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 13 23:29:48.086203 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 23:29:48.086526 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:29:48.107916 sudo[1590]: pam_unix(sudo:session): session closed for user root Apr 13 23:29:48.110940 sshd[1587]: pam_unix(sshd:session): session closed for user core Apr 13 23:29:48.158235 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:49670.service: Deactivated successfully. Apr 13 23:29:48.160409 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 23:29:48.162060 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Apr 13 23:29:48.175874 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:49684.service - OpenSSH per-connection server daemon (10.0.0.1:49684). Apr 13 23:29:48.178289 systemd-logind[1442]: Removed session 5. Apr 13 23:29:48.213378 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 49684 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:48.214542 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:48.219422 systemd-logind[1442]: New session 6 of user core. Apr 13 23:29:48.229398 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 23:29:48.285758 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 23:29:48.286573 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:29:48.292133 sudo[1599]: pam_unix(sudo:session): session closed for user root Apr 13 23:29:48.299770 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 23:29:48.300096 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:29:48.384133 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 13 23:29:48.385421 auditctl[1602]: No rules Apr 13 23:29:48.385873 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 23:29:48.386105 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 23:29:48.389464 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 23:29:48.432047 augenrules[1620]: No rules Apr 13 23:29:48.433589 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:29:48.435010 sudo[1598]: pam_unix(sudo:session): session closed for user root Apr 13 23:29:48.438254 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 13 23:29:48.454920 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:49684.service: Deactivated successfully. Apr 13 23:29:48.456368 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 23:29:48.458045 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Apr 13 23:29:48.467750 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:49692.service - OpenSSH per-connection server daemon (10.0.0.1:49692). Apr 13 23:29:48.469727 systemd-logind[1442]: Removed session 6. Apr 13 23:29:48.513280 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 49692 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 23:29:48.515539 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:29:48.523537 systemd-logind[1442]: New session 7 of user core. Apr 13 23:29:48.536633 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 23:29:48.610057 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 23:29:48.610851 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:29:48.688944 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 13 23:29:48.717614 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Apr 13 23:29:48.717871 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 13 23:29:54.113559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:29:54.114255 systemd[1]: kubelet.service: Consumed 1.194s CPU time. Apr 13 23:29:54.129665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:29:54.195576 systemd[1]: Reloading requested from client PID 1672 ('systemctl') (unit session-7.scope)... Apr 13 23:29:54.195607 systemd[1]: Reloading... Apr 13 23:29:54.290109 zram_generator::config[1713]: No configuration found. Apr 13 23:29:54.482449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:29:54.585476 systemd[1]: Reloading finished in 389 ms. Apr 13 23:29:54.692224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:29:54.696660 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:29:54.696971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:29:54.709438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:29:54.947709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:29:54.954130 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:29:55.090449 kubelet[1759]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:29:55.090449 kubelet[1759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 23:29:55.090873 kubelet[1759]: I0413 23:29:55.090573 1759 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:29:55.942714 kubelet[1759]: I0413 23:29:55.942649 1759 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 23:29:55.942714 kubelet[1759]: I0413 23:29:55.942683 1759 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:29:55.942714 kubelet[1759]: I0413 23:29:55.942715 1759 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 23:29:55.942714 kubelet[1759]: I0413 23:29:55.942723 1759 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 23:29:55.943007 kubelet[1759]: I0413 23:29:55.942960 1759 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:29:55.948009 kubelet[1759]: I0413 23:29:55.947933 1759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:29:55.956864 kubelet[1759]: E0413 23:29:55.956237 1759 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:29:55.956864 kubelet[1759]: I0413 23:29:55.956305 1759 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 23:29:55.961741 kubelet[1759]: I0413 23:29:55.961691 1759 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 23:29:55.964557 kubelet[1759]: I0413 23:29:55.964340 1759 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:29:55.965080 kubelet[1759]: I0413 23:29:55.964582 1759 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 23:29:55.965080 kubelet[1759]: I0413 23:29:55.965074 1759 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:29:55.965080 
kubelet[1759]: I0413 23:29:55.965090 1759 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 23:29:55.965345 kubelet[1759]: I0413 23:29:55.965248 1759 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 23:29:55.976052 kubelet[1759]: I0413 23:29:55.975976 1759 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:29:55.976834 kubelet[1759]: I0413 23:29:55.976678 1759 kubelet.go:475] "Attempting to sync node with API server" Apr 13 23:29:55.976965 kubelet[1759]: I0413 23:29:55.976844 1759 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:29:55.977380 kubelet[1759]: I0413 23:29:55.977324 1759 kubelet.go:387] "Adding apiserver pod source" Apr 13 23:29:55.977380 kubelet[1759]: I0413 23:29:55.977357 1759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:29:55.977600 kubelet[1759]: E0413 23:29:55.977540 1759 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:29:55.977702 kubelet[1759]: E0413 23:29:55.977605 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:29:55.980165 kubelet[1759]: I0413 23:29:55.980042 1759 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:29:55.981069 kubelet[1759]: I0413 23:29:55.980900 1759 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:29:55.981282 kubelet[1759]: I0413 23:29:55.981135 1759 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 23:29:55.981319 kubelet[1759]: W0413 23:29:55.981300 1759 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 23:29:55.986149 kubelet[1759]: I0413 23:29:55.986105 1759 server.go:1262] "Started kubelet" Apr 13 23:29:55.987069 kubelet[1759]: I0413 23:29:55.987006 1759 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:29:55.993881 kubelet[1759]: I0413 23:29:55.992463 1759 server.go:310] "Adding debug handlers to kubelet server" Apr 13 23:29:55.993881 kubelet[1759]: I0413 23:29:55.993128 1759 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:29:55.993881 kubelet[1759]: I0413 23:29:55.993214 1759 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 23:29:55.993881 kubelet[1759]: I0413 23:29:55.993685 1759 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:29:55.994933 kubelet[1759]: I0413 23:29:55.994897 1759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:29:55.998670 kubelet[1759]: E0413 23:29:55.998644 1759 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:29:55.998939 kubelet[1759]: I0413 23:29:55.998902 1759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:29:56.000079 kubelet[1759]: E0413 23:29:56.000066 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:29:56.000502 kubelet[1759]: I0413 23:29:56.000458 1759 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 23:29:56.000919 kubelet[1759]: I0413 23:29:56.000906 1759 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 23:29:56.001013 kubelet[1759]: I0413 23:29:56.001007 1759 reconciler.go:29] "Reconciler: start to sync state" Apr 13 23:29:56.002275 kubelet[1759]: I0413 23:29:56.002253 1759 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:29:56.002502 kubelet[1759]: I0413 23:29:56.002456 1759 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:29:56.004360 kubelet[1759]: I0413 23:29:56.004334 1759 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:29:56.019222 kubelet[1759]: I0413 23:29:56.019166 1759 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:29:56.021688 kubelet[1759]: I0413 23:29:56.020010 1759 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:29:56.021688 kubelet[1759]: I0413 23:29:56.020040 1759 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:29:56.083034 kubelet[1759]: I0413 23:29:56.082506 1759 policy_none.go:49] "None policy: Start" Apr 13 23:29:56.083034 kubelet[1759]: I0413 23:29:56.082537 1759 memory_manager.go:187] "Starting memorymanager" 
policy="None" Apr 13 23:29:56.083034 kubelet[1759]: I0413 23:29:56.082552 1759 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 23:29:56.091535 kubelet[1759]: I0413 23:29:56.090939 1759 policy_none.go:47] "Start" Apr 13 23:29:56.100471 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 23:29:56.101550 kubelet[1759]: E0413 23:29:56.101512 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:29:56.112742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 23:29:56.116105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 23:29:56.135957 kubelet[1759]: E0413 23:29:56.135786 1759 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:29:56.136269 kubelet[1759]: I0413 23:29:56.136041 1759 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:29:56.136269 kubelet[1759]: I0413 23:29:56.136058 1759 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:29:56.136524 kubelet[1759]: I0413 23:29:56.136328 1759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:29:56.140452 kubelet[1759]: E0413 23:29:56.140397 1759 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 23:29:56.140561 kubelet[1759]: E0413 23:29:56.140472 1759 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Apr 13 23:29:56.155981 kubelet[1759]: I0413 23:29:56.155927 1759 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 23:29:56.157378 kubelet[1759]: I0413 23:29:56.157324 1759 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 23:29:56.157378 kubelet[1759]: I0413 23:29:56.157375 1759 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 23:29:56.157597 kubelet[1759]: I0413 23:29:56.157419 1759 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 23:29:56.157624 kubelet[1759]: E0413 23:29:56.157604 1759 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 13 23:29:56.267587 kubelet[1759]: I0413 23:29:56.255919 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:29:56.433742 kubelet[1759]: E0413 23:29:56.433576 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:29:56.433951 kubelet[1759]: E0413 23:29:56.433860 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 13 23:29:56.433994 kubelet[1759]: E0413 23:29:56.433958 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:29:56.434073 kubelet[1759]: E0413 23:29:56.434046 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot 
list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:29:56.434155 kubelet[1759]: E0413 23:29:56.434121 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:29:56.435202 kubelet[1759]: E0413 23:29:56.433469 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745d26d5a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:55.986060713 +0000 UTC m=+1.026331865,LastTimestamp:2026-04-13 23:29:55.986060713 +0000 UTC m=+1.026331865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:56.448947 kubelet[1759]: E0413 23:29:56.448870 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:29:56.449075 kubelet[1759]: E0413 23:29:56.448986 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745de64b3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:55.998608187 +0000 UTC m=+1.038879342,LastTimestamp:2026-04-13 23:29:55.998608187 +0000 UTC m=+1.038879342,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:56.544967 kubelet[1759]: E0413 23:29:56.544721 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:56.651052 kubelet[1759]: I0413 23:29:56.650979 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:29:56.960458 kubelet[1759]: E0413 23:29:56.957368 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:56.962422 kubelet[1759]: E0413 23:29:56.962361 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Apr 13 23:29:56.978662 kubelet[1759]: E0413 23:29:56.978066 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:29:57.095484 kubelet[1759]: E0413 23:29:57.095113 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:29:57.104436 kubelet[1759]: E0413 23:29:57.102140 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.118384 kubelet[1759]: E0413 23:29:57.115683 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:56.079971358 +0000 UTC m=+1.120242512,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.284613 kubelet[1759]: E0413 23:29:57.283773 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:56.080154111 +0000 UTC m=+1.120425264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.336962 kubelet[1759]: E0413 23:29:57.335641 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:56.080169732 +0000 UTC m=+1.120440894,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.336962 kubelet[1759]: E0413 23:29:57.336492 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:29:57.462831 kubelet[1759]: E0413 23:29:57.462706 1759 controller.go:145] "Failed to ensure lease exists, will 
retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Apr 13 23:29:57.502921 kubelet[1759]: I0413 23:29:57.502830 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:29:57.584146 kubelet[1759]: E0413 23:29:57.583515 1759 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e746638f7e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.138244072 +0000 UTC m=+1.178515229,LastTimestamp:2026-04-13 23:29:56.138244072 +0000 UTC m=+1.178515229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.588859 kubelet[1759]: E0413 23:29:57.588041 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 
23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:56.255689963 +0000 UTC m=+1.295961107,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.724453 kubelet[1759]: E0413 23:29:57.724050 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:29:57.775992 kubelet[1759]: E0413 23:29:57.775320 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:56.255728719 +0000 UTC m=+1.295999871,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:57.799833 kubelet[1759]: E0413 23:29:57.799548 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:29:57.822303 kubelet[1759]: E0413 
23:29:57.821603 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:29:57.901955 kubelet[1759]: E0413 23:29:57.901617 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:29:57.985502 kubelet[1759]: E0413 23:29:57.985331 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:29:57.991141 kubelet[1759]: E0413 23:29:57.988722 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:56.255731994 +0000 UTC m=+1.296003146,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.157929 kubelet[1759]: E0413 23:29:58.139495 1759 event.go:359] "Server rejected event (will not retry!)" err="events 
\"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:56.650933085 +0000 UTC m=+1.691204238,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.340176 kubelet[1759]: E0413 23:29:58.339934 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:56.650942949 +0000 UTC m=+1.691214110,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.500612 kubelet[1759]: E0413 23:29:58.499665 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User 
\"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Apr 13 23:29:58.526902 kubelet[1759]: I0413 23:29:58.525776 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:29:58.543144 kubelet[1759]: E0413 23:29:58.542675 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:56.650946872 +0000 UTC m=+1.691218027,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.635944 kubelet[1759]: E0413 23:29:58.635864 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:29:58.636705 kubelet[1759]: E0413 23:29:58.635895 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:57.502746049 +0000 UTC m=+2.543017198,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.682836 kubelet[1759]: E0413 23:29:58.682564 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:57.502756799 +0000 UTC m=+2.543027947,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.847833 kubelet[1759]: E0413 23:29:58.847652 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:57.502762012 +0000 UTC m=+2.543033164,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:58.985730 kubelet[1759]: E0413 23:29:58.985572 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:29:59.001833 kubelet[1759]: E0413 23:29:59.001686 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:57.556274554 +0000 UTC m=+2.596545701,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.028367 kubelet[1759]: E0413 23:29:59.024887 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:57.5563031 +0000 UTC m=+2.596574258,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.128637 kubelet[1759]: E0413 23:29:59.128173 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:57.556309374 +0000 UTC m=+2.596580531,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.191625 kubelet[1759]: E0413 23:29:59.191429 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:58.525723208 +0000 UTC m=+3.565994356,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.320517 kubelet[1759]: E0413 23:29:59.320272 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:58.52573521 +0000 UTC m=+3.566006366,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.412393 kubelet[1759]: E0413 23:29:59.412072 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0d0bdf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0d0bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.139 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017925087 +0000 UTC m=+1.058196239,LastTimestamp:2026-04-13 23:29:58.525740597 +0000 UTC m=+3.566011753,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.559899 kubelet[1759]: E0413 23:29:59.558995 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:29:59.559899 kubelet[1759]: E0413 23:29:59.559370 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:29:59.980076 kubelet[1759]: E0413 23:29:59.976604 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0ccfb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0ccfb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017909683 +0000 
UTC m=+1.058180827,LastTimestamp:2026-04-13 23:29:59.863085826 +0000 UTC m=+4.903356971,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:29:59.986924 kubelet[1759]: E0413 23:29:59.986222 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:00.049288 kubelet[1759]: E0413 23:30:00.048126 1759 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.139.18a60e745f0cfe25\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.139.18a60e745f0cfe25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.139,UID:10.0.0.139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.139 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.139,},FirstTimestamp:2026-04-13 23:29:56.017921573 +0000 UTC m=+1.058192725,LastTimestamp:2026-04-13 23:29:59.863095758 +0000 UTC m=+4.903366912,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.139,}" Apr 13 23:30:00.152492 kubelet[1759]: E0413 23:30:00.152291 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Apr 13 23:30:00.287502 kubelet[1759]: I0413 23:30:00.287407 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:00.390015 kubelet[1759]: E0413 23:30:00.389917 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:30:00.660393 kubelet[1759]: E0413 23:30:00.660161 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:30:00.670903 kubelet[1759]: E0413 23:30:00.669606 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:30:00.987534 kubelet[1759]: E0413 23:30:00.986568 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:01.988598 kubelet[1759]: E0413 23:30:01.988422 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:02.945774 kubelet[1759]: E0413 23:30:02.945491 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:30:02.988933 kubelet[1759]: E0413 23:30:02.988754 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:03.516525 kubelet[1759]: E0413 23:30:03.516406 1759 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Apr 13 23:30:03.878293 kubelet[1759]: I0413 23:30:03.878034 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:03.990052 kubelet[1759]: E0413 23:30:03.989874 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:04.247745 kubelet[1759]: E0413 23:30:04.247532 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:30:04.991686 kubelet[1759]: E0413 23:30:04.990986 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:05.496251 kubelet[1759]: E0413 23:30:05.494757 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:30:05.939996 kubelet[1759]: E0413 23:30:05.939438 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:30:05.995992 kubelet[1759]: E0413 23:30:05.995872 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:06.140646 kubelet[1759]: E0413 
23:30:06.140564 1759 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Apr 13 23:30:06.910032 kubelet[1759]: E0413 23:30:06.909951 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:30:06.997031 kubelet[1759]: E0413 23:30:06.996936 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:07.997487 kubelet[1759]: E0413 23:30:07.997339 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:08.998125 kubelet[1759]: E0413 23:30:08.997932 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:09.960936 kubelet[1759]: E0413 23:30:09.960868 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 13 23:30:09.998761 kubelet[1759]: E0413 23:30:09.998691 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:10.649929 kubelet[1759]: I0413 23:30:10.649877 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:10.933160 kubelet[1759]: E0413 23:30:10.932951 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get 
resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:30:10.999619 kubelet[1759]: E0413 23:30:10.999522 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:11.129032 kubelet[1759]: E0413 23:30:11.128901 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:30:12.001172 kubelet[1759]: E0413 23:30:12.000924 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:13.001600 kubelet[1759]: E0413 23:30:13.001523 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:13.090131 kubelet[1759]: E0413 23:30:13.090061 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:30:14.002036 kubelet[1759]: E0413 23:30:14.001955 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:15.002605 kubelet[1759]: E0413 23:30:15.002391 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:15.982501 kubelet[1759]: E0413 23:30:15.982392 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:16.003468 kubelet[1759]: E0413 23:30:16.003294 1759 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:16.142322 kubelet[1759]: E0413 23:30:16.142072 1759 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Apr 13 23:30:17.004626 kubelet[1759]: E0413 23:30:17.004228 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:17.131608 kubelet[1759]: E0413 23:30:17.131465 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 13 23:30:17.935215 kubelet[1759]: I0413 23:30:17.935139 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:17.969720 kubelet[1759]: E0413 23:30:17.969637 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:30:18.005099 kubelet[1759]: E0413 23:30:18.004980 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:18.532846 kubelet[1759]: E0413 23:30:18.532646 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:30:18.766907 kubelet[1759]: E0413 23:30:18.766787 1759 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
\"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:30:19.006195 kubelet[1759]: E0413 23:30:19.006041 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:20.006707 kubelet[1759]: E0413 23:30:20.006504 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:21.007839 kubelet[1759]: E0413 23:30:21.007732 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:22.009412 kubelet[1759]: E0413 23:30:22.009262 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:23.010673 kubelet[1759]: E0413 23:30:23.010582 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:24.011873 kubelet[1759]: E0413 23:30:24.011753 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:24.343421 kubelet[1759]: E0413 23:30:24.343358 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 13 23:30:24.971173 kubelet[1759]: I0413 23:30:24.971130 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:25.012826 kubelet[1759]: E0413 23:30:25.012555 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:25.017456 kubelet[1759]: E0413 
23:30:25.017389 1759 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.139\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.139" Apr 13 23:30:26.013062 kubelet[1759]: E0413 23:30:26.012944 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:26.142633 kubelet[1759]: E0413 23:30:26.142378 1759 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Apr 13 23:30:27.014117 kubelet[1759]: E0413 23:30:27.013740 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:27.945352 kubelet[1759]: I0413 23:30:27.945236 1759 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 13 23:30:28.014663 kubelet[1759]: E0413 23:30:28.014590 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:28.475743 sudo[1631]: pam_unix(sudo:session): session closed for user root Apr 13 23:30:28.479249 sshd[1628]: pam_unix(sshd:session): session closed for user core Apr 13 23:30:28.483194 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:49692.service: Deactivated successfully. Apr 13 23:30:28.490966 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 23:30:28.497679 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Apr 13 23:30:28.505379 systemd-logind[1442]: Removed session 7. 
Apr 13 23:30:29.015985 kubelet[1759]: E0413 23:30:29.015891 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:29.197957 update_engine[1443]: I20260413 23:30:29.195207 1443 update_attempter.cc:509] Updating boot flags... Apr 13 23:30:29.307239 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1810) Apr 13 23:30:29.358367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1814) Apr 13 23:30:30.017417 kubelet[1759]: E0413 23:30:30.016953 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:30.414883 kubelet[1759]: E0413 23:30:30.414666 1759 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.139" not found Apr 13 23:30:31.019854 kubelet[1759]: E0413 23:30:31.019416 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:31.421780 kubelet[1759]: E0413 23:30:31.421704 1759 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.139\" not found" node="10.0.0.139" Apr 13 23:30:31.752018 kubelet[1759]: E0413 23:30:31.751671 1759 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.139" not found Apr 13 23:30:32.019912 kubelet[1759]: E0413 23:30:32.019719 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:32.019912 kubelet[1759]: I0413 23:30:32.019740 1759 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Apr 13 23:30:32.054672 kubelet[1759]: I0413 23:30:32.054570 1759 kubelet_node_status.go:78] 
"Successfully registered node" node="10.0.0.139" Apr 13 23:30:32.054672 kubelet[1759]: E0413 23:30:32.054624 1759 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.0.0.139\": node \"10.0.0.139\" not found" Apr 13 23:30:32.290412 kubelet[1759]: E0413 23:30:32.290321 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.391372 kubelet[1759]: E0413 23:30:32.391282 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.492067 kubelet[1759]: E0413 23:30:32.491967 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.593388 kubelet[1759]: E0413 23:30:32.592976 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.694404 kubelet[1759]: E0413 23:30:32.694153 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.795464 kubelet[1759]: E0413 23:30:32.795206 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.895994 kubelet[1759]: E0413 23:30:32.895608 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:32.996981 kubelet[1759]: E0413 23:30:32.996716 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.020738 kubelet[1759]: E0413 23:30:33.020647 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:33.097111 kubelet[1759]: E0413 23:30:33.096947 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not 
found" Apr 13 23:30:33.198342 kubelet[1759]: E0413 23:30:33.198069 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.298787 kubelet[1759]: E0413 23:30:33.298703 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.399943 kubelet[1759]: E0413 23:30:33.399834 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.500874 kubelet[1759]: E0413 23:30:33.500532 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.601650 kubelet[1759]: E0413 23:30:33.601535 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.702018 kubelet[1759]: E0413 23:30:33.701756 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.802575 kubelet[1759]: E0413 23:30:33.802484 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:33.903607 kubelet[1759]: E0413 23:30:33.903493 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.004520 kubelet[1759]: E0413 23:30:34.004240 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.021483 kubelet[1759]: E0413 23:30:34.021386 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:34.105579 kubelet[1759]: E0413 23:30:34.105400 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.206566 kubelet[1759]: E0413 23:30:34.206458 1759 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.307857 kubelet[1759]: E0413 23:30:34.307711 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.408301 kubelet[1759]: E0413 23:30:34.408076 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.509191 kubelet[1759]: E0413 23:30:34.509081 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.610393 kubelet[1759]: E0413 23:30:34.609946 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.711233 kubelet[1759]: E0413 23:30:34.711042 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.812176 kubelet[1759]: E0413 23:30:34.812071 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:34.912358 kubelet[1759]: E0413 23:30:34.912284 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.013881 kubelet[1759]: E0413 23:30:35.013306 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.022679 kubelet[1759]: E0413 23:30:35.022613 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:35.114760 kubelet[1759]: E0413 23:30:35.114487 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.215692 kubelet[1759]: E0413 23:30:35.215522 1759 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"10.0.0.139\" not found" Apr 13 23:30:35.315956 kubelet[1759]: E0413 23:30:35.315873 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.417417 kubelet[1759]: E0413 23:30:35.417305 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.518672 kubelet[1759]: E0413 23:30:35.518556 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.619701 kubelet[1759]: E0413 23:30:35.619487 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.720046 kubelet[1759]: E0413 23:30:35.719855 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.821183 kubelet[1759]: E0413 23:30:35.821040 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.921983 kubelet[1759]: E0413 23:30:35.921564 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:35.978482 kubelet[1759]: E0413 23:30:35.978254 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:36.022577 kubelet[1759]: E0413 23:30:36.022281 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:36.023885 kubelet[1759]: E0413 23:30:36.023711 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:36.123493 kubelet[1759]: E0413 23:30:36.123359 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:36.143238 kubelet[1759]: 
E0413 23:30:36.143135 1759 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Apr 13 23:30:36.224208 kubelet[1759]: E0413 23:30:36.223592 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:36.324734 kubelet[1759]: E0413 23:30:36.324633 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:36.438779 kubelet[1759]: E0413 23:30:36.438276 1759 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Apr 13 23:30:36.540213 kubelet[1759]: I0413 23:30:36.540170 1759 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Apr 13 23:30:36.540752 containerd[1460]: time="2026-04-13T23:30:36.540609077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 23:30:36.541214 kubelet[1759]: I0413 23:30:36.541193 1759 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Apr 13 23:30:37.010235 kubelet[1759]: I0413 23:30:37.009771 1759 apiserver.go:52] "Watching apiserver" Apr 13 23:30:37.024190 kubelet[1759]: E0413 23:30:37.024096 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:37.063123 systemd[1]: Created slice kubepods-besteffort-pod5965e3ef_ce16_4534_b76a_c12199261661.slice - libcontainer container kubepods-besteffort-pod5965e3ef_ce16_4534_b76a_c12199261661.slice. 
Apr 13 23:30:37.101643 kubelet[1759]: I0413 23:30:37.101578 1759 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 23:30:37.141061 kubelet[1759]: I0413 23:30:37.140954 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5965e3ef-ce16-4534-b76a-c12199261661-kube-proxy\") pod \"kube-proxy-x4tn2\" (UID: \"5965e3ef-ce16-4534-b76a-c12199261661\") " pod="kube-system/kube-proxy-x4tn2" Apr 13 23:30:37.141061 kubelet[1759]: I0413 23:30:37.141004 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5965e3ef-ce16-4534-b76a-c12199261661-xtables-lock\") pod \"kube-proxy-x4tn2\" (UID: \"5965e3ef-ce16-4534-b76a-c12199261661\") " pod="kube-system/kube-proxy-x4tn2" Apr 13 23:30:37.141061 kubelet[1759]: I0413 23:30:37.141039 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5965e3ef-ce16-4534-b76a-c12199261661-lib-modules\") pod \"kube-proxy-x4tn2\" (UID: \"5965e3ef-ce16-4534-b76a-c12199261661\") " pod="kube-system/kube-proxy-x4tn2" Apr 13 23:30:37.141061 kubelet[1759]: I0413 23:30:37.141054 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x85bb\" (UniqueName: \"kubernetes.io/projected/5965e3ef-ce16-4534-b76a-c12199261661-kube-api-access-x85bb\") pod \"kube-proxy-x4tn2\" (UID: \"5965e3ef-ce16-4534-b76a-c12199261661\") " pod="kube-system/kube-proxy-x4tn2" Apr 13 23:30:37.378893 kubelet[1759]: E0413 23:30:37.378853 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:30:37.379783 containerd[1460]: 
time="2026-04-13T23:30:37.379710706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4tn2,Uid:5965e3ef-ce16-4534-b76a-c12199261661,Namespace:kube-system,Attempt:0,}" Apr 13 23:30:37.847592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136918663.mount: Deactivated successfully. Apr 13 23:30:37.854993 containerd[1460]: time="2026-04-13T23:30:37.854916316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:30:37.855567 containerd[1460]: time="2026-04-13T23:30:37.855406994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 13 23:30:37.856439 containerd[1460]: time="2026-04-13T23:30:37.856404061Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:30:37.858093 containerd[1460]: time="2026-04-13T23:30:37.858052125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:30:37.858993 containerd[1460]: time="2026-04-13T23:30:37.858948381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.122318ms" Apr 13 23:30:37.937935 containerd[1460]: time="2026-04-13T23:30:37.937195977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:30:37.937935 containerd[1460]: time="2026-04-13T23:30:37.937870898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:30:37.937935 containerd[1460]: time="2026-04-13T23:30:37.937881454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:30:37.938598 containerd[1460]: time="2026-04-13T23:30:37.938493573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:30:37.995159 systemd[1]: Started cri-containerd-90224ccf7576967e76d1a00af41d3b460fef75311ecde61332218ce7938071ef.scope - libcontainer container 90224ccf7576967e76d1a00af41d3b460fef75311ecde61332218ce7938071ef. Apr 13 23:30:38.015548 containerd[1460]: time="2026-04-13T23:30:38.015496257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4tn2,Uid:5965e3ef-ce16-4534-b76a-c12199261661,Namespace:kube-system,Attempt:0,} returns sandbox id \"90224ccf7576967e76d1a00af41d3b460fef75311ecde61332218ce7938071ef\"" Apr 13 23:30:38.017120 kubelet[1759]: E0413 23:30:38.017051 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:30:38.018190 containerd[1460]: time="2026-04-13T23:30:38.018151621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 23:30:38.024392 kubelet[1759]: E0413 23:30:38.024326 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:38.942000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773609225.mount: Deactivated successfully. 
Apr 13 23:30:39.025010 kubelet[1759]: E0413 23:30:39.024933 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:39.185831 containerd[1460]: time="2026-04-13T23:30:39.185708052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:30:39.186403 containerd[1460]: time="2026-04-13T23:30:39.186367428Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861668" Apr 13 23:30:39.187268 containerd[1460]: time="2026-04-13T23:30:39.187238089Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:30:39.189285 containerd[1460]: time="2026-04-13T23:30:39.189202821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:30:39.190182 containerd[1460]: time="2026-04-13T23:30:39.190130721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.171935631s" Apr 13 23:30:39.190182 containerd[1460]: time="2026-04-13T23:30:39.190165491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 13 23:30:39.195113 containerd[1460]: time="2026-04-13T23:30:39.194973661Z" level=info msg="CreateContainer within sandbox 
\"90224ccf7576967e76d1a00af41d3b460fef75311ecde61332218ce7938071ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 23:30:39.208779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644309921.mount: Deactivated successfully. Apr 13 23:30:39.211986 containerd[1460]: time="2026-04-13T23:30:39.211934487Z" level=info msg="CreateContainer within sandbox \"90224ccf7576967e76d1a00af41d3b460fef75311ecde61332218ce7938071ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a305f6e5adbea0e8f0112d0003b1e63836ffae240d9b7fb139ebacfee3a3cc0c\"" Apr 13 23:30:39.213346 containerd[1460]: time="2026-04-13T23:30:39.213294702Z" level=info msg="StartContainer for \"a305f6e5adbea0e8f0112d0003b1e63836ffae240d9b7fb139ebacfee3a3cc0c\"" Apr 13 23:30:39.285039 systemd[1]: Started cri-containerd-a305f6e5adbea0e8f0112d0003b1e63836ffae240d9b7fb139ebacfee3a3cc0c.scope - libcontainer container a305f6e5adbea0e8f0112d0003b1e63836ffae240d9b7fb139ebacfee3a3cc0c. Apr 13 23:30:39.309657 containerd[1460]: time="2026-04-13T23:30:39.309480531Z" level=info msg="StartContainer for \"a305f6e5adbea0e8f0112d0003b1e63836ffae240d9b7fb139ebacfee3a3cc0c\" returns successfully" Apr 13 23:30:39.337969 kubelet[1759]: E0413 23:30:39.337673 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:30:39.588118 kubelet[1759]: I0413 23:30:39.587779 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4tn2" podStartSLOduration=6.414042809 podStartE2EDuration="7.587746551s" podCreationTimestamp="2026-04-13 23:30:32 +0000 UTC" firstStartedPulling="2026-04-13 23:30:38.017711365 +0000 UTC m=+43.057982510" lastFinishedPulling="2026-04-13 23:30:39.191415108 +0000 UTC m=+44.231686252" observedRunningTime="2026-04-13 23:30:39.512436515 +0000 UTC m=+44.552707677" watchObservedRunningTime="2026-04-13 
23:30:39.587746551 +0000 UTC m=+44.628017698" Apr 13 23:30:40.026358 kubelet[1759]: E0413 23:30:40.026245 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:40.352645 kubelet[1759]: E0413 23:30:40.351752 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:30:41.026656 kubelet[1759]: E0413 23:30:41.026551 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:42.028050 kubelet[1759]: E0413 23:30:42.027884 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:43.029227 kubelet[1759]: E0413 23:30:43.028958 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:44.030841 kubelet[1759]: E0413 23:30:44.030639 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:45.031892 kubelet[1759]: E0413 23:30:45.031757 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:46.033221 kubelet[1759]: E0413 23:30:46.033085 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:47.034298 kubelet[1759]: E0413 23:30:47.034222 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:48.035616 kubelet[1759]: E0413 23:30:48.035531 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:49.036302 kubelet[1759]: E0413 23:30:49.036191 1759 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:50.036524 kubelet[1759]: E0413 23:30:50.036433 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:51.036889 kubelet[1759]: E0413 23:30:51.036765 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:52.038377 kubelet[1759]: E0413 23:30:52.038241 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:53.038859 kubelet[1759]: E0413 23:30:53.038762 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:54.040195 kubelet[1759]: E0413 23:30:54.039992 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:55.042005 kubelet[1759]: E0413 23:30:55.041013 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:56.002084 kubelet[1759]: E0413 23:30:56.001505 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:56.042560 kubelet[1759]: E0413 23:30:56.042485 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:57.043437 kubelet[1759]: E0413 23:30:57.043270 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:58.044068 kubelet[1759]: E0413 23:30:58.044001 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:59.045158 kubelet[1759]: E0413 23:30:59.045078 1759 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:30:59.662296 systemd[1]: Created slice kubepods-besteffort-pod8aceaa95_8094_4a4f_87ce_d1cbb1fbc529.slice - libcontainer container kubepods-besteffort-pod8aceaa95_8094_4a4f_87ce_d1cbb1fbc529.slice. Apr 13 23:30:59.799713 kubelet[1759]: I0413 23:30:59.799659 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aceaa95-8094-4a4f-87ce-d1cbb1fbc529-tigera-ca-bundle\") pod \"calico-typha-f85564b54-xk79c\" (UID: \"8aceaa95-8094-4a4f-87ce-d1cbb1fbc529\") " pod="calico-system/calico-typha-f85564b54-xk79c" Apr 13 23:30:59.799713 kubelet[1759]: I0413 23:30:59.799716 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8aceaa95-8094-4a4f-87ce-d1cbb1fbc529-typha-certs\") pod \"calico-typha-f85564b54-xk79c\" (UID: \"8aceaa95-8094-4a4f-87ce-d1cbb1fbc529\") " pod="calico-system/calico-typha-f85564b54-xk79c" Apr 13 23:30:59.800011 kubelet[1759]: I0413 23:30:59.799751 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxc77\" (UniqueName: \"kubernetes.io/projected/8aceaa95-8094-4a4f-87ce-d1cbb1fbc529-kube-api-access-pxc77\") pod \"calico-typha-f85564b54-xk79c\" (UID: \"8aceaa95-8094-4a4f-87ce-d1cbb1fbc529\") " pod="calico-system/calico-typha-f85564b54-xk79c" Apr 13 23:30:59.968518 kubelet[1759]: E0413 23:30:59.968348 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:30:59.969400 containerd[1460]: time="2026-04-13T23:30:59.969288405Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-f85564b54-xk79c,Uid:8aceaa95-8094-4a4f-87ce-d1cbb1fbc529,Namespace:calico-system,Attempt:0,}" Apr 13 23:30:59.996696 containerd[1460]: time="2026-04-13T23:30:59.995929484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:30:59.996696 containerd[1460]: time="2026-04-13T23:30:59.996664645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:30:59.997083 containerd[1460]: time="2026-04-13T23:30:59.996726052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:30:59.997083 containerd[1460]: time="2026-04-13T23:30:59.996995516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:00.022156 systemd[1]: Started cri-containerd-3cfd7ed5a2faf1c3059ae0a749c857275f91ce84a1852ead3751791efb2f95ee.scope - libcontainer container 3cfd7ed5a2faf1c3059ae0a749c857275f91ce84a1852ead3751791efb2f95ee. 
Apr 13 23:31:00.070395 kubelet[1759]: E0413 23:31:00.070354 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:00.095124 containerd[1460]: time="2026-04-13T23:31:00.095087109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f85564b54-xk79c,Uid:8aceaa95-8094-4a4f-87ce-d1cbb1fbc529,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cfd7ed5a2faf1c3059ae0a749c857275f91ce84a1852ead3751791efb2f95ee\"" Apr 13 23:31:00.095865 kubelet[1759]: E0413 23:31:00.095837 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:00.096777 containerd[1460]: time="2026-04-13T23:31:00.096754050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 23:31:00.524762 systemd[1]: Created slice kubepods-besteffort-pod85ade17f_a41e_4d97_ae5b_8655dd7e2fc2.slice - libcontainer container kubepods-besteffort-pod85ade17f_a41e_4d97_ae5b_8655dd7e2fc2.slice. 
Apr 13 23:31:00.573959 kubelet[1759]: E0413 23:31:00.573898 1759 status_manager.go:1018] "Failed to get status for pod" err="pods \"calico-node-z9jhc\" is forbidden: User \"system:node:10.0.0.139\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '10.0.0.139' and this object" podUID="85ade17f-a41e-4d97-ae5b-8655dd7e2fc2" pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705305 kubelet[1759]: I0413 23:31:00.705198 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-lib-modules\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705305 kubelet[1759]: I0413 23:31:00.705273 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-bpffs\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705305 kubelet[1759]: I0413 23:31:00.705298 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-cni-log-dir\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705578 kubelet[1759]: I0413 23:31:00.705339 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-policysync\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705578 kubelet[1759]: I0413 23:31:00.705378 
1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-tigera-ca-bundle\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705578 kubelet[1759]: I0413 23:31:00.705393 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-nodeproc\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705578 kubelet[1759]: I0413 23:31:00.705404 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-xtables-lock\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705578 kubelet[1759]: I0413 23:31:00.705425 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-node-certs\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705676 kubelet[1759]: I0413 23:31:00.705442 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-cni-bin-dir\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705676 kubelet[1759]: I0413 23:31:00.705458 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-var-lib-calico\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705676 kubelet[1759]: I0413 23:31:00.705506 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-cni-net-dir\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705676 kubelet[1759]: I0413 23:31:00.705546 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-flexvol-driver-host\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.705676 kubelet[1759]: I0413 23:31:00.705589 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-sys-fs\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.706091 kubelet[1759]: I0413 23:31:00.705609 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-var-run-calico\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.706091 kubelet[1759]: I0413 23:31:00.705636 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg69z\" (UniqueName: 
\"kubernetes.io/projected/85ade17f-a41e-4d97-ae5b-8655dd7e2fc2-kube-api-access-zg69z\") pod \"calico-node-z9jhc\" (UID: \"85ade17f-a41e-4d97-ae5b-8655dd7e2fc2\") " pod="calico-system/calico-node-z9jhc" Apr 13 23:31:00.808933 kubelet[1759]: E0413 23:31:00.808894 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:00.808933 kubelet[1759]: W0413 23:31:00.808928 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:00.808933 kubelet[1759]: E0413 23:31:00.808953 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:00.810598 kubelet[1759]: E0413 23:31:00.810531 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:00.810598 kubelet[1759]: W0413 23:31:00.810555 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:00.810598 kubelet[1759]: E0413 23:31:00.810574 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:00.865160 kubelet[1759]: E0413 23:31:00.865112 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:00.865160 kubelet[1759]: W0413 23:31:00.865137 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:00.865160 kubelet[1759]: E0413 23:31:00.865154 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.071230 kubelet[1759]: E0413 23:31:01.070636 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:01.130553 containerd[1460]: time="2026-04-13T23:31:01.130469192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9jhc,Uid:85ade17f-a41e-4d97-ae5b-8655dd7e2fc2,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:01.155012 containerd[1460]: time="2026-04-13T23:31:01.154845454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:01.155012 containerd[1460]: time="2026-04-13T23:31:01.154957687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:01.155012 containerd[1460]: time="2026-04-13T23:31:01.154975654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:01.155250 containerd[1460]: time="2026-04-13T23:31:01.155157625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:01.185288 systemd[1]: Started cri-containerd-3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978.scope - libcontainer container 3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978. Apr 13 23:31:01.205306 containerd[1460]: time="2026-04-13T23:31:01.205199104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9jhc,Uid:85ade17f-a41e-4d97-ae5b-8655dd7e2fc2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\"" Apr 13 23:31:01.338422 kubelet[1759]: E0413 23:31:01.337984 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:01.412062 kubelet[1759]: E0413 23:31:01.412001 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.412062 kubelet[1759]: W0413 23:31:01.412056 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.412221 kubelet[1759]: E0413 23:31:01.412079 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.412299 kubelet[1759]: E0413 23:31:01.412276 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.412299 kubelet[1759]: W0413 23:31:01.412294 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.412332 kubelet[1759]: E0413 23:31:01.412306 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.412539 kubelet[1759]: E0413 23:31:01.412513 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.412539 kubelet[1759]: W0413 23:31:01.412527 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.412539 kubelet[1759]: E0413 23:31:01.412538 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.412834 kubelet[1759]: E0413 23:31:01.412729 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.412834 kubelet[1759]: W0413 23:31:01.412742 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.412834 kubelet[1759]: E0413 23:31:01.412749 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.413147 kubelet[1759]: E0413 23:31:01.413110 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.413147 kubelet[1759]: W0413 23:31:01.413143 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.413219 kubelet[1759]: E0413 23:31:01.413156 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.413424 kubelet[1759]: E0413 23:31:01.413326 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.413424 kubelet[1759]: W0413 23:31:01.413339 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.413424 kubelet[1759]: E0413 23:31:01.413350 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.413630 kubelet[1759]: E0413 23:31:01.413555 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.413630 kubelet[1759]: W0413 23:31:01.413562 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.413630 kubelet[1759]: E0413 23:31:01.413570 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.413775 kubelet[1759]: E0413 23:31:01.413757 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.413775 kubelet[1759]: W0413 23:31:01.413769 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.413775 kubelet[1759]: E0413 23:31:01.413777 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.413998 kubelet[1759]: E0413 23:31:01.413985 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.413998 kubelet[1759]: W0413 23:31:01.413996 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.414060 kubelet[1759]: E0413 23:31:01.414004 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.414208 kubelet[1759]: E0413 23:31:01.414189 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.414208 kubelet[1759]: W0413 23:31:01.414203 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.414208 kubelet[1759]: E0413 23:31:01.414210 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.414384 kubelet[1759]: E0413 23:31:01.414362 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.414384 kubelet[1759]: W0413 23:31:01.414377 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.414384 kubelet[1759]: E0413 23:31:01.414386 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.414609 kubelet[1759]: E0413 23:31:01.414595 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.414609 kubelet[1759]: W0413 23:31:01.414607 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.414690 kubelet[1759]: E0413 23:31:01.414614 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.414838 kubelet[1759]: E0413 23:31:01.414813 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.414838 kubelet[1759]: W0413 23:31:01.414832 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.414899 kubelet[1759]: E0413 23:31:01.414842 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.415050 kubelet[1759]: E0413 23:31:01.415001 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.415050 kubelet[1759]: W0413 23:31:01.415009 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.415050 kubelet[1759]: E0413 23:31:01.415015 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.415282 kubelet[1759]: E0413 23:31:01.415265 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.415282 kubelet[1759]: W0413 23:31:01.415277 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.415338 kubelet[1759]: E0413 23:31:01.415301 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.415489 kubelet[1759]: E0413 23:31:01.415478 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.415489 kubelet[1759]: W0413 23:31:01.415488 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.415529 kubelet[1759]: E0413 23:31:01.415494 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.415677 kubelet[1759]: E0413 23:31:01.415666 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.415677 kubelet[1759]: W0413 23:31:01.415676 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.415724 kubelet[1759]: E0413 23:31:01.415681 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.415870 kubelet[1759]: E0413 23:31:01.415859 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.415870 kubelet[1759]: W0413 23:31:01.415869 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.415910 kubelet[1759]: E0413 23:31:01.415874 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.416055 kubelet[1759]: E0413 23:31:01.416013 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.416055 kubelet[1759]: W0413 23:31:01.416024 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.416055 kubelet[1759]: E0413 23:31:01.416047 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.416199 kubelet[1759]: E0413 23:31:01.416186 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.416199 kubelet[1759]: W0413 23:31:01.416196 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.416232 kubelet[1759]: E0413 23:31:01.416202 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.513294 kubelet[1759]: E0413 23:31:01.513236 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.513294 kubelet[1759]: W0413 23:31:01.513278 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.513294 kubelet[1759]: E0413 23:31:01.513306 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.513481 kubelet[1759]: I0413 23:31:01.513344 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db378709-9093-42e0-99e6-ba9beb70b60d-registration-dir\") pod \"csi-node-driver-r2cfg\" (UID: \"db378709-9093-42e0-99e6-ba9beb70b60d\") " pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:01.513913 kubelet[1759]: E0413 23:31:01.513864 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.513913 kubelet[1759]: W0413 23:31:01.513900 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.514012 kubelet[1759]: E0413 23:31:01.513924 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.514012 kubelet[1759]: I0413 23:31:01.513966 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db378709-9093-42e0-99e6-ba9beb70b60d-socket-dir\") pod \"csi-node-driver-r2cfg\" (UID: \"db378709-9093-42e0-99e6-ba9beb70b60d\") " pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:01.514271 kubelet[1759]: E0413 23:31:01.514254 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.514271 kubelet[1759]: W0413 23:31:01.514270 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.514327 kubelet[1759]: E0413 23:31:01.514280 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.514482 kubelet[1759]: E0413 23:31:01.514459 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.514482 kubelet[1759]: W0413 23:31:01.514478 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.514482 kubelet[1759]: E0413 23:31:01.514491 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.514718 kubelet[1759]: E0413 23:31:01.514705 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.514718 kubelet[1759]: W0413 23:31:01.514716 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.514778 kubelet[1759]: E0413 23:31:01.514723 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.514778 kubelet[1759]: I0413 23:31:01.514743 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgcj4\" (UniqueName: \"kubernetes.io/projected/db378709-9093-42e0-99e6-ba9beb70b60d-kube-api-access-pgcj4\") pod \"csi-node-driver-r2cfg\" (UID: \"db378709-9093-42e0-99e6-ba9beb70b60d\") " pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:01.515016 kubelet[1759]: E0413 23:31:01.514977 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.515072 kubelet[1759]: W0413 23:31:01.515017 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.515072 kubelet[1759]: E0413 23:31:01.515028 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.515252 kubelet[1759]: E0413 23:31:01.515233 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.515293 kubelet[1759]: W0413 23:31:01.515254 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.515293 kubelet[1759]: E0413 23:31:01.515266 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.515473 kubelet[1759]: E0413 23:31:01.515446 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.515473 kubelet[1759]: W0413 23:31:01.515464 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.515506 kubelet[1759]: E0413 23:31:01.515472 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.515506 kubelet[1759]: I0413 23:31:01.515499 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db378709-9093-42e0-99e6-ba9beb70b60d-kubelet-dir\") pod \"csi-node-driver-r2cfg\" (UID: \"db378709-9093-42e0-99e6-ba9beb70b60d\") " pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:01.515670 kubelet[1759]: E0413 23:31:01.515657 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.515699 kubelet[1759]: W0413 23:31:01.515669 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.515720 kubelet[1759]: E0413 23:31:01.515701 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.515735 kubelet[1759]: I0413 23:31:01.515728 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/db378709-9093-42e0-99e6-ba9beb70b60d-varrun\") pod \"csi-node-driver-r2cfg\" (UID: \"db378709-9093-42e0-99e6-ba9beb70b60d\") " pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:01.515947 kubelet[1759]: E0413 23:31:01.515932 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.515992 kubelet[1759]: W0413 23:31:01.515947 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.515992 kubelet[1759]: E0413 23:31:01.515956 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.516140 kubelet[1759]: E0413 23:31:01.516128 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.516161 kubelet[1759]: W0413 23:31:01.516139 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.516161 kubelet[1759]: E0413 23:31:01.516146 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.516299 kubelet[1759]: E0413 23:31:01.516288 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.516299 kubelet[1759]: W0413 23:31:01.516298 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.516328 kubelet[1759]: E0413 23:31:01.516304 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.516489 kubelet[1759]: E0413 23:31:01.516471 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.516489 kubelet[1759]: W0413 23:31:01.516484 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.516489 kubelet[1759]: E0413 23:31:01.516493 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.516665 kubelet[1759]: E0413 23:31:01.516651 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.516665 kubelet[1759]: W0413 23:31:01.516664 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.516720 kubelet[1759]: E0413 23:31:01.516670 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.516855 kubelet[1759]: E0413 23:31:01.516842 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.516855 kubelet[1759]: W0413 23:31:01.516852 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.516903 kubelet[1759]: E0413 23:31:01.516858 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.617192 kubelet[1759]: E0413 23:31:01.617017 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.617192 kubelet[1759]: W0413 23:31:01.617073 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.617192 kubelet[1759]: E0413 23:31:01.617108 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.617390 kubelet[1759]: E0413 23:31:01.617342 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.617390 kubelet[1759]: W0413 23:31:01.617350 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.617390 kubelet[1759]: E0413 23:31:01.617364 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.617658 kubelet[1759]: E0413 23:31:01.617623 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.617658 kubelet[1759]: W0413 23:31:01.617654 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.617730 kubelet[1759]: E0413 23:31:01.617667 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.617925 kubelet[1759]: E0413 23:31:01.617911 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.617925 kubelet[1759]: W0413 23:31:01.617923 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.618004 kubelet[1759]: E0413 23:31:01.617930 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.618156 kubelet[1759]: E0413 23:31:01.618139 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.618188 kubelet[1759]: W0413 23:31:01.618155 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.618188 kubelet[1759]: E0413 23:31:01.618166 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.618440 kubelet[1759]: E0413 23:31:01.618401 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.618440 kubelet[1759]: W0413 23:31:01.618417 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.618440 kubelet[1759]: E0413 23:31:01.618424 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.618589 kubelet[1759]: E0413 23:31:01.618576 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.618589 kubelet[1759]: W0413 23:31:01.618587 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.618652 kubelet[1759]: E0413 23:31:01.618593 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.618828 kubelet[1759]: E0413 23:31:01.618786 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.618857 kubelet[1759]: W0413 23:31:01.618831 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.618857 kubelet[1759]: E0413 23:31:01.618844 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.619110 kubelet[1759]: E0413 23:31:01.619097 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.619110 kubelet[1759]: W0413 23:31:01.619105 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.619176 kubelet[1759]: E0413 23:31:01.619115 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.619294 kubelet[1759]: E0413 23:31:01.619281 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.619316 kubelet[1759]: W0413 23:31:01.619294 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.619316 kubelet[1759]: E0413 23:31:01.619303 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.619522 kubelet[1759]: E0413 23:31:01.619504 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.619522 kubelet[1759]: W0413 23:31:01.619519 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.619598 kubelet[1759]: E0413 23:31:01.619527 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.619739 kubelet[1759]: E0413 23:31:01.619725 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.619764 kubelet[1759]: W0413 23:31:01.619739 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.619764 kubelet[1759]: E0413 23:31:01.619749 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.620005 kubelet[1759]: E0413 23:31:01.619984 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.620005 kubelet[1759]: W0413 23:31:01.619998 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.620005 kubelet[1759]: E0413 23:31:01.620005 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.620215 kubelet[1759]: E0413 23:31:01.620201 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.620239 kubelet[1759]: W0413 23:31:01.620216 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.620239 kubelet[1759]: E0413 23:31:01.620227 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.620629 kubelet[1759]: E0413 23:31:01.620441 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.620629 kubelet[1759]: W0413 23:31:01.620455 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.620629 kubelet[1759]: E0413 23:31:01.620464 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.621028 kubelet[1759]: E0413 23:31:01.621007 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.621111 kubelet[1759]: W0413 23:31:01.621049 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.621111 kubelet[1759]: E0413 23:31:01.621069 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.621323 kubelet[1759]: E0413 23:31:01.621300 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.621323 kubelet[1759]: W0413 23:31:01.621319 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.621552 kubelet[1759]: E0413 23:31:01.621330 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.621737 kubelet[1759]: E0413 23:31:01.621588 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.621737 kubelet[1759]: W0413 23:31:01.621595 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.621737 kubelet[1759]: E0413 23:31:01.621603 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.622021 kubelet[1759]: E0413 23:31:01.622005 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.622062 kubelet[1759]: W0413 23:31:01.622023 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.622062 kubelet[1759]: E0413 23:31:01.622052 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.622337 kubelet[1759]: E0413 23:31:01.622270 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.622337 kubelet[1759]: W0413 23:31:01.622303 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.622337 kubelet[1759]: E0413 23:31:01.622312 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.622658 kubelet[1759]: E0413 23:31:01.622640 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.622658 kubelet[1759]: W0413 23:31:01.622657 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.622723 kubelet[1759]: E0413 23:31:01.622668 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.622947 kubelet[1759]: E0413 23:31:01.622928 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.622947 kubelet[1759]: W0413 23:31:01.622944 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.622947 kubelet[1759]: E0413 23:31:01.622953 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.623164 kubelet[1759]: E0413 23:31:01.623134 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.623164 kubelet[1759]: W0413 23:31:01.623149 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.623164 kubelet[1759]: E0413 23:31:01.623157 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.623375 kubelet[1759]: E0413 23:31:01.623359 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.623375 kubelet[1759]: W0413 23:31:01.623373 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.623433 kubelet[1759]: E0413 23:31:01.623380 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:01.623598 kubelet[1759]: E0413 23:31:01.623584 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.623598 kubelet[1759]: W0413 23:31:01.623596 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.623648 kubelet[1759]: E0413 23:31:01.623604 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.679663 kubelet[1759]: E0413 23:31:01.679611 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:01.679663 kubelet[1759]: W0413 23:31:01.679642 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:01.679780 kubelet[1759]: E0413 23:31:01.679665 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:01.964094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1979517760.mount: Deactivated successfully. 
Apr 13 23:31:02.086113 kubelet[1759]: E0413 23:31:02.086004 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:02.791857 containerd[1460]: time="2026-04-13T23:31:02.791770171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:02.792572 containerd[1460]: time="2026-04-13T23:31:02.792529994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 23:31:02.793287 containerd[1460]: time="2026-04-13T23:31:02.793238814Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:02.795480 containerd[1460]: time="2026-04-13T23:31:02.795423276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:02.796054 containerd[1460]: time="2026-04-13T23:31:02.796000559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.69922021s" Apr 13 23:31:02.796054 containerd[1460]: time="2026-04-13T23:31:02.796054096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 23:31:02.796897 containerd[1460]: time="2026-04-13T23:31:02.796852122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 
23:31:02.804948 containerd[1460]: time="2026-04-13T23:31:02.804886271Z" level=info msg="CreateContainer within sandbox \"3cfd7ed5a2faf1c3059ae0a749c857275f91ce84a1852ead3751791efb2f95ee\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 23:31:02.817063 containerd[1460]: time="2026-04-13T23:31:02.816990924Z" level=info msg="CreateContainer within sandbox \"3cfd7ed5a2faf1c3059ae0a749c857275f91ce84a1852ead3751791efb2f95ee\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"57844a7e9fa7f16ac351680bb0f48a6d0be33d88d439b30e0fcf4748357ede62\"" Apr 13 23:31:02.819921 containerd[1460]: time="2026-04-13T23:31:02.818088292Z" level=info msg="StartContainer for \"57844a7e9fa7f16ac351680bb0f48a6d0be33d88d439b30e0fcf4748357ede62\"" Apr 13 23:31:02.848134 systemd[1]: Started cri-containerd-57844a7e9fa7f16ac351680bb0f48a6d0be33d88d439b30e0fcf4748357ede62.scope - libcontainer container 57844a7e9fa7f16ac351680bb0f48a6d0be33d88d439b30e0fcf4748357ede62. Apr 13 23:31:02.885952 containerd[1460]: time="2026-04-13T23:31:02.885716877Z" level=info msg="StartContainer for \"57844a7e9fa7f16ac351680bb0f48a6d0be33d88d439b30e0fcf4748357ede62\" returns successfully" Apr 13 23:31:03.087461 kubelet[1759]: E0413 23:31:03.087232 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:03.158173 kubelet[1759]: E0413 23:31:03.158076 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:03.421467 kubelet[1759]: E0413 23:31:03.421179 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 13 23:31:03.445016 kubelet[1759]: E0413 23:31:03.444933 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 23:31:03.445016 kubelet[1759]: W0413 23:31:03.444980 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 23:31:03.445016 kubelet[1759]: E0413 23:31:03.445016 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 23:31:04.088183 kubelet[1759]: E0413 23:31:04.087943 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:31:04.424088 kubelet[1759]: E0413 23:31:04.423731 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:31:04.497446 kubelet[1759]: I0413 23:31:04.497388 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f85564b54-xk79c" podStartSLOduration=2.797148908 podStartE2EDuration="5.497370692s" podCreationTimestamp="2026-04-13 23:30:59 +0000 UTC" firstStartedPulling="2026-04-13 23:31:00.096520157 +0000 UTC m=+65.136791307" lastFinishedPulling="2026-04-13 23:31:02.796741947 +0000 UTC m=+67.837013091" observedRunningTime="2026-04-13 23:31:03.516283048 +0000 UTC m=+68.556554203" watchObservedRunningTime="2026-04-13 23:31:04.497370692 +0000 UTC m=+69.537641846"
Apr 13 23:31:04.532908 containerd[1460]: time="2026-04-13T23:31:04.532687575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:31:04.533862 containerd[1460]: time="2026-04-13T23:31:04.533671926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 13 23:31:04.534916 containerd[1460]: time="2026-04-13T23:31:04.534864506Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:04.536576 containerd[1460]: time="2026-04-13T23:31:04.536529885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:04.537570 containerd[1460]: time="2026-04-13T23:31:04.537511623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.740627041s" Apr 13 23:31:04.537570 containerd[1460]: time="2026-04-13T23:31:04.537556262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 23:31:04.542148 containerd[1460]: time="2026-04-13T23:31:04.542087701Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 23:31:04.555874 containerd[1460]: time="2026-04-13T23:31:04.555836954Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2\"" Apr 13 23:31:04.556964 containerd[1460]: time="2026-04-13T23:31:04.556845244Z" level=info msg="StartContainer for \"6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2\"" Apr 13 23:31:04.579006 kubelet[1759]: E0413 23:31:04.578950 1759 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.579006 kubelet[1759]: W0413 23:31:04.579002 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.579166 kubelet[1759]: E0413 23:31:04.579023 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.579371 kubelet[1759]: E0413 23:31:04.579294 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.579371 kubelet[1759]: W0413 23:31:04.579318 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.579371 kubelet[1759]: E0413 23:31:04.579325 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.579629 kubelet[1759]: E0413 23:31:04.579556 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.579629 kubelet[1759]: W0413 23:31:04.579567 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.579629 kubelet[1759]: E0413 23:31:04.579577 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.579824 kubelet[1759]: E0413 23:31:04.579778 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.579843 kubelet[1759]: W0413 23:31:04.579823 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.579843 kubelet[1759]: E0413 23:31:04.579831 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.580109 kubelet[1759]: E0413 23:31:04.580066 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.580109 kubelet[1759]: W0413 23:31:04.580078 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.580109 kubelet[1759]: E0413 23:31:04.580085 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.580299 kubelet[1759]: E0413 23:31:04.580278 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.580299 kubelet[1759]: W0413 23:31:04.580294 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.580350 kubelet[1759]: E0413 23:31:04.580304 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.580965 kubelet[1759]: E0413 23:31:04.580591 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.580965 kubelet[1759]: W0413 23:31:04.580616 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.580965 kubelet[1759]: E0413 23:31:04.580628 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.580965 kubelet[1759]: E0413 23:31:04.580889 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.580965 kubelet[1759]: W0413 23:31:04.580896 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.580965 kubelet[1759]: E0413 23:31:04.580905 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.581134 kubelet[1759]: E0413 23:31:04.581076 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.581134 kubelet[1759]: W0413 23:31:04.581082 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.581134 kubelet[1759]: E0413 23:31:04.581090 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.581284 kubelet[1759]: E0413 23:31:04.581254 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.581284 kubelet[1759]: W0413 23:31:04.581273 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.581284 kubelet[1759]: E0413 23:31:04.581282 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.581491 kubelet[1759]: E0413 23:31:04.581476 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.581511 kubelet[1759]: W0413 23:31:04.581490 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.581511 kubelet[1759]: E0413 23:31:04.581499 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.581701 kubelet[1759]: E0413 23:31:04.581685 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.581739 kubelet[1759]: W0413 23:31:04.581701 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.581739 kubelet[1759]: E0413 23:31:04.581710 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.582001 kubelet[1759]: E0413 23:31:04.581987 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.582001 kubelet[1759]: W0413 23:31:04.582001 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.582001 kubelet[1759]: E0413 23:31:04.582010 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.582394 kubelet[1759]: E0413 23:31:04.582379 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.582394 kubelet[1759]: W0413 23:31:04.582392 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.582451 kubelet[1759]: E0413 23:31:04.582401 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.582596 kubelet[1759]: E0413 23:31:04.582582 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.582616 kubelet[1759]: W0413 23:31:04.582596 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.582616 kubelet[1759]: E0413 23:31:04.582603 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.582992 kubelet[1759]: E0413 23:31:04.582978 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.583010 kubelet[1759]: W0413 23:31:04.582993 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.583010 kubelet[1759]: E0413 23:31:04.583001 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.583468 kubelet[1759]: E0413 23:31:04.583438 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.583468 kubelet[1759]: W0413 23:31:04.583464 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.583557 kubelet[1759]: E0413 23:31:04.583483 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 23:31:04.583692 kubelet[1759]: E0413 23:31:04.583676 1759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 23:31:04.583709 kubelet[1759]: W0413 23:31:04.583694 1759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 23:31:04.583709 kubelet[1759]: E0413 23:31:04.583703 1759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 23:31:04.586109 systemd[1]: Started cri-containerd-6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2.scope - libcontainer container 6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2. Apr 13 23:31:04.613829 containerd[1460]: time="2026-04-13T23:31:04.613760005Z" level=info msg="StartContainer for \"6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2\" returns successfully" Apr 13 23:31:04.621289 systemd[1]: cri-containerd-6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2.scope: Deactivated successfully. Apr 13 23:31:04.640326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2-rootfs.mount: Deactivated successfully. 
Apr 13 23:31:04.824054 containerd[1460]: time="2026-04-13T23:31:04.823939248Z" level=info msg="shim disconnected" id=6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2 namespace=k8s.io Apr 13 23:31:04.824054 containerd[1460]: time="2026-04-13T23:31:04.824030380Z" level=warning msg="cleaning up after shim disconnected" id=6fb67f95d7b24dcd9638b23fb63c9083a64ceb7e8e79b94c46e91184489776d2 namespace=k8s.io Apr 13 23:31:04.824054 containerd[1460]: time="2026-04-13T23:31:04.824059805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:31:05.101962 kubelet[1759]: E0413 23:31:05.101775 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:05.158577 kubelet[1759]: E0413 23:31:05.158450 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:05.427285 kubelet[1759]: E0413 23:31:05.427122 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:05.428203 containerd[1460]: time="2026-04-13T23:31:05.428168259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 23:31:06.102369 kubelet[1759]: E0413 23:31:06.102192 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:06.429600 kubelet[1759]: E0413 23:31:06.429444 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:07.103284 kubelet[1759]: E0413 23:31:07.103228 1759 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:07.158624 kubelet[1759]: E0413 23:31:07.158494 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:08.104015 kubelet[1759]: E0413 23:31:08.103887 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:09.104520 kubelet[1759]: E0413 23:31:09.104463 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:09.157983 kubelet[1759]: E0413 23:31:09.157890 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:10.104751 kubelet[1759]: E0413 23:31:10.104673 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:11.105148 kubelet[1759]: E0413 23:31:11.105035 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:11.158913 kubelet[1759]: E0413 23:31:11.158851 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 
23:31:12.105411 kubelet[1759]: E0413 23:31:12.105292 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:13.105970 kubelet[1759]: E0413 23:31:13.105919 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:13.158367 kubelet[1759]: E0413 23:31:13.158303 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:13.815473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343493035.mount: Deactivated successfully. Apr 13 23:31:13.932904 containerd[1460]: time="2026-04-13T23:31:13.932709631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:13.933394 containerd[1460]: time="2026-04-13T23:31:13.933352039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 23:31:13.934623 containerd[1460]: time="2026-04-13T23:31:13.934556809Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:13.936737 containerd[1460]: time="2026-04-13T23:31:13.936669567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:13.937131 containerd[1460]: time="2026-04-13T23:31:13.937105265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.508893634s" Apr 13 23:31:13.937185 containerd[1460]: time="2026-04-13T23:31:13.937137672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 23:31:13.941493 containerd[1460]: time="2026-04-13T23:31:13.941449873Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 23:31:13.959289 containerd[1460]: time="2026-04-13T23:31:13.959192726Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16\"" Apr 13 23:31:13.962056 containerd[1460]: time="2026-04-13T23:31:13.961988954Z" level=info msg="StartContainer for \"15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16\"" Apr 13 23:31:14.000505 systemd[1]: Started cri-containerd-15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16.scope - libcontainer container 15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16. 
Apr 13 23:31:14.043695 containerd[1460]: time="2026-04-13T23:31:14.043408018Z" level=info msg="StartContainer for \"15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16\" returns successfully" Apr 13 23:31:14.106424 kubelet[1759]: E0413 23:31:14.106297 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:14.123185 systemd[1]: cri-containerd-15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16.scope: Deactivated successfully. Apr 13 23:31:14.271326 containerd[1460]: time="2026-04-13T23:31:14.271238914Z" level=info msg="shim disconnected" id=15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16 namespace=k8s.io Apr 13 23:31:14.271326 containerd[1460]: time="2026-04-13T23:31:14.271315923Z" level=warning msg="cleaning up after shim disconnected" id=15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16 namespace=k8s.io Apr 13 23:31:14.271326 containerd[1460]: time="2026-04-13T23:31:14.271325348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:31:14.449716 containerd[1460]: time="2026-04-13T23:31:14.449569601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 23:31:14.816707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15a4403d6adcb3a760f7adf2b44b0d36385b147156e63dea8c3c3d7ef1458b16-rootfs.mount: Deactivated successfully. 
Apr 13 23:31:15.107776 kubelet[1759]: E0413 23:31:15.107570 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:15.158744 kubelet[1759]: E0413 23:31:15.158535 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:15.978364 kubelet[1759]: E0413 23:31:15.978240 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:16.108342 kubelet[1759]: E0413 23:31:16.108223 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:17.109217 kubelet[1759]: E0413 23:31:17.109125 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:17.184202 kubelet[1759]: E0413 23:31:17.184111 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:18.110124 kubelet[1759]: E0413 23:31:18.110035 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:18.877137 containerd[1460]: time="2026-04-13T23:31:18.877054767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:18.877703 containerd[1460]: time="2026-04-13T23:31:18.877668952Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 23:31:18.878558 containerd[1460]: time="2026-04-13T23:31:18.878512969Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:18.881702 containerd[1460]: time="2026-04-13T23:31:18.881620303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:18.882585 containerd[1460]: time="2026-04-13T23:31:18.882535214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.432931108s" Apr 13 23:31:18.882585 containerd[1460]: time="2026-04-13T23:31:18.882574911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 23:31:18.887528 containerd[1460]: time="2026-04-13T23:31:18.887442826Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 23:31:18.905019 containerd[1460]: time="2026-04-13T23:31:18.904968733Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5\"" Apr 13 23:31:18.906513 containerd[1460]: time="2026-04-13T23:31:18.905438518Z" level=info msg="StartContainer 
for \"8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5\"" Apr 13 23:31:18.940233 systemd[1]: Started cri-containerd-8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5.scope - libcontainer container 8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5. Apr 13 23:31:18.965611 containerd[1460]: time="2026-04-13T23:31:18.965552691Z" level=info msg="StartContainer for \"8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5\" returns successfully" Apr 13 23:31:19.111015 kubelet[1759]: E0413 23:31:19.110980 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:19.159530 kubelet[1759]: E0413 23:31:19.158920 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:19.445427 systemd[1]: cri-containerd-8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5.scope: Deactivated successfully. Apr 13 23:31:19.469138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5-rootfs.mount: Deactivated successfully. 
Apr 13 23:31:19.520783 kubelet[1759]: I0413 23:31:19.520720 1759 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 23:31:19.590931 containerd[1460]: time="2026-04-13T23:31:19.590627861Z" level=info msg="shim disconnected" id=8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5 namespace=k8s.io Apr 13 23:31:19.590931 containerd[1460]: time="2026-04-13T23:31:19.590886450Z" level=warning msg="cleaning up after shim disconnected" id=8b384b4b662d837a7849c8339abe71583bb9555f7117abf8645cdd94d0b881c5 namespace=k8s.io Apr 13 23:31:19.590931 containerd[1460]: time="2026-04-13T23:31:19.590899115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:31:20.006501 systemd[1]: Created slice kubepods-besteffort-pod9ea06102_a332_4253_8eb9_bb324b77c2d2.slice - libcontainer container kubepods-besteffort-pod9ea06102_a332_4253_8eb9_bb324b77c2d2.slice. Apr 13 23:31:20.011397 systemd[1]: Created slice kubepods-besteffort-pod6e08b664_aec1_4d1f_aaf5_d1070a1bd37d.slice - libcontainer container kubepods-besteffort-pod6e08b664_aec1_4d1f_aaf5_d1070a1bd37d.slice. Apr 13 23:31:20.037652 systemd[1]: Created slice kubepods-besteffort-pode599cfc7_1fc3_4ebb_92b5_f669e8a23ba7.slice - libcontainer container kubepods-besteffort-pode599cfc7_1fc3_4ebb_92b5_f669e8a23ba7.slice. Apr 13 23:31:20.070500 systemd[1]: Created slice kubepods-burstable-pod16b5b403_644e_4ee0_ad2f_a49ad057b5c5.slice - libcontainer container kubepods-burstable-pod16b5b403_644e_4ee0_ad2f_a49ad057b5c5.slice. Apr 13 23:31:20.085511 systemd[1]: Created slice kubepods-besteffort-pod9d2e4322_5d1c_4d78_b98d_4acc3604c168.slice - libcontainer container kubepods-besteffort-pod9d2e4322_5d1c_4d78_b98d_4acc3604c168.slice. Apr 13 23:31:20.090554 systemd[1]: Created slice kubepods-besteffort-podcf2113af_27fe_40a0_be9c_b9d5fcb78a7b.slice - libcontainer container kubepods-besteffort-podcf2113af_27fe_40a0_be9c_b9d5fcb78a7b.slice. 
Apr 13 23:31:20.096129 systemd[1]: Created slice kubepods-burstable-pod30fde92e_28ad_486a_acd9_ae88e3ee265a.slice - libcontainer container kubepods-burstable-pod30fde92e_28ad_486a_acd9_ae88e3ee265a.slice. Apr 13 23:31:20.112253 kubelet[1759]: E0413 23:31:20.112159 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:20.141066 kubelet[1759]: I0413 23:31:20.140930 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e08b664-aec1-4d1f-aaf5-d1070a1bd37d-config\") pod \"goldmane-cccfbd5cf-6h5xs\" (UID: \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\") " pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.141267 kubelet[1759]: I0413 23:31:20.141135 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6e08b664-aec1-4d1f-aaf5-d1070a1bd37d-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-6h5xs\" (UID: \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\") " pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.141328 kubelet[1759]: I0413 23:31:20.141281 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8tl\" (UniqueName: \"kubernetes.io/projected/e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7-kube-api-access-jm8tl\") pod \"calico-apiserver-654bb5c945-vfzv5\" (UID: \"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7\") " pod="calico-system/calico-apiserver-654bb5c945-vfzv5" Apr 13 23:31:20.141400 kubelet[1759]: I0413 23:31:20.141343 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvvth\" (UniqueName: \"kubernetes.io/projected/9ea06102-a332-4253-8eb9-bb324b77c2d2-kube-api-access-fvvth\") pod \"calico-kube-controllers-776667d67-kwkmw\" (UID: \"9ea06102-a332-4253-8eb9-bb324b77c2d2\") " 
pod="calico-system/calico-kube-controllers-776667d67-kwkmw" Apr 13 23:31:20.141422 kubelet[1759]: I0413 23:31:20.141379 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b664-aec1-4d1f-aaf5-d1070a1bd37d-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-6h5xs\" (UID: \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\") " pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.141450 kubelet[1759]: I0413 23:31:20.141438 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7-calico-apiserver-certs\") pod \"calico-apiserver-654bb5c945-vfzv5\" (UID: \"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7\") " pod="calico-system/calico-apiserver-654bb5c945-vfzv5" Apr 13 23:31:20.141488 kubelet[1759]: I0413 23:31:20.141464 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ea06102-a332-4253-8eb9-bb324b77c2d2-tigera-ca-bundle\") pod \"calico-kube-controllers-776667d67-kwkmw\" (UID: \"9ea06102-a332-4253-8eb9-bb324b77c2d2\") " pod="calico-system/calico-kube-controllers-776667d67-kwkmw" Apr 13 23:31:20.141508 kubelet[1759]: I0413 23:31:20.141497 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2pgf\" (UniqueName: \"kubernetes.io/projected/6e08b664-aec1-4d1f-aaf5-d1070a1bd37d-kube-api-access-t2pgf\") pod \"goldmane-cccfbd5cf-6h5xs\" (UID: \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\") " pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.241766 kubelet[1759]: I0413 23:31:20.241703 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/30fde92e-28ad-486a-acd9-ae88e3ee265a-config-volume\") pod \"coredns-66bc5c9577-bwj28\" (UID: \"30fde92e-28ad-486a-acd9-ae88e3ee265a\") " pod="kube-system/coredns-66bc5c9577-bwj28" Apr 13 23:31:20.241766 kubelet[1759]: I0413 23:31:20.241742 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vcj\" (UniqueName: \"kubernetes.io/projected/30fde92e-28ad-486a-acd9-ae88e3ee265a-kube-api-access-f7vcj\") pod \"coredns-66bc5c9577-bwj28\" (UID: \"30fde92e-28ad-486a-acd9-ae88e3ee265a\") " pod="kube-system/coredns-66bc5c9577-bwj28" Apr 13 23:31:20.241766 kubelet[1759]: I0413 23:31:20.241761 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-nginx-config\") pod \"whisker-775c5f9b46-w497s\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.241766 kubelet[1759]: I0413 23:31:20.241775 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d2e4322-5d1c-4d78-b98d-4acc3604c168-calico-apiserver-certs\") pod \"calico-apiserver-654bb5c945-wpvsk\" (UID: \"9d2e4322-5d1c-4d78-b98d-4acc3604c168\") " pod="calico-system/calico-apiserver-654bb5c945-wpvsk" Apr 13 23:31:20.242289 kubelet[1759]: I0413 23:31:20.241870 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-backend-key-pair\") pod \"whisker-775c5f9b46-w497s\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.242475 kubelet[1759]: I0413 23:31:20.242423 1759 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16b5b403-644e-4ee0-ad2f-a49ad057b5c5-config-volume\") pod \"coredns-66bc5c9577-4fkd7\" (UID: \"16b5b403-644e-4ee0-ad2f-a49ad057b5c5\") " pod="kube-system/coredns-66bc5c9577-4fkd7" Apr 13 23:31:20.242528 kubelet[1759]: I0413 23:31:20.242495 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98r7g\" (UniqueName: \"kubernetes.io/projected/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-kube-api-access-98r7g\") pod \"whisker-775c5f9b46-w497s\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.242548 kubelet[1759]: I0413 23:31:20.242542 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-ca-bundle\") pod \"whisker-775c5f9b46-w497s\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.242598 kubelet[1759]: I0413 23:31:20.242582 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5rh\" (UniqueName: \"kubernetes.io/projected/16b5b403-644e-4ee0-ad2f-a49ad057b5c5-kube-api-access-lk5rh\") pod \"coredns-66bc5c9577-4fkd7\" (UID: \"16b5b403-644e-4ee0-ad2f-a49ad057b5c5\") " pod="kube-system/coredns-66bc5c9577-4fkd7" Apr 13 23:31:20.242618 kubelet[1759]: I0413 23:31:20.242605 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjr4\" (UniqueName: \"kubernetes.io/projected/9d2e4322-5d1c-4d78-b98d-4acc3604c168-kube-api-access-bxjr4\") pod \"calico-apiserver-654bb5c945-wpvsk\" (UID: \"9d2e4322-5d1c-4d78-b98d-4acc3604c168\") " pod="calico-system/calico-apiserver-654bb5c945-wpvsk" Apr 13 23:31:20.498775 
containerd[1460]: time="2026-04-13T23:31:20.498698278Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 23:31:20.517408 containerd[1460]: time="2026-04-13T23:31:20.517293510Z" level=info msg="CreateContainer within sandbox \"3a34e81551f42ba425ceacd54af292c6ebec9c32f861a905eeb38905514f1978\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2478f706f7010bb9c10bfe259086d2a71822026335d716d1d2cea0ee5024d922\"" Apr 13 23:31:20.518428 containerd[1460]: time="2026-04-13T23:31:20.518378749Z" level=info msg="StartContainer for \"2478f706f7010bb9c10bfe259086d2a71822026335d716d1d2cea0ee5024d922\"" Apr 13 23:31:20.549119 systemd[1]: Started cri-containerd-2478f706f7010bb9c10bfe259086d2a71822026335d716d1d2cea0ee5024d922.scope - libcontainer container 2478f706f7010bb9c10bfe259086d2a71822026335d716d1d2cea0ee5024d922. Apr 13 23:31:20.583829 containerd[1460]: time="2026-04-13T23:31:20.583748425Z" level=info msg="StartContainer for \"2478f706f7010bb9c10bfe259086d2a71822026335d716d1d2cea0ee5024d922\" returns successfully" Apr 13 23:31:20.614222 containerd[1460]: time="2026-04-13T23:31:20.613954412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776667d67-kwkmw,Uid:9ea06102-a332-4253-8eb9-bb324b77c2d2,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:20.616338 containerd[1460]: time="2026-04-13T23:31:20.616148445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6h5xs,Uid:6e08b664-aec1-4d1f-aaf5-d1070a1bd37d,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:20.642910 containerd[1460]: time="2026-04-13T23:31:20.642842737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-vfzv5,Uid:e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:20.686537 kubelet[1759]: E0413 23:31:20.685942 1759 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:20.686678 containerd[1460]: time="2026-04-13T23:31:20.686558517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4fkd7,Uid:16b5b403-644e-4ee0-ad2f-a49ad057b5c5,Namespace:kube-system,Attempt:0,}" Apr 13 23:31:20.692139 containerd[1460]: time="2026-04-13T23:31:20.692072228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-wpvsk,Uid:9d2e4322-5d1c-4d78-b98d-4acc3604c168,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:20.696220 containerd[1460]: time="2026-04-13T23:31:20.696173531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775c5f9b46-w497s,Uid:cf2113af-27fe-40a0-be9c-b9d5fcb78a7b,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:20.700722 kubelet[1759]: E0413 23:31:20.700692 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:20.701532 containerd[1460]: time="2026-04-13T23:31:20.701443157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwj28,Uid:30fde92e-28ad-486a-acd9-ae88e3ee265a,Namespace:kube-system,Attempt:0,}" Apr 13 23:31:20.704675 containerd[1460]: time="2026-04-13T23:31:20.704632290Z" level=error msg="Failed to destroy network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.705056 containerd[1460]: time="2026-04-13T23:31:20.704980631Z" level=error msg="encountered an error cleaning up failed sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.705358 containerd[1460]: time="2026-04-13T23:31:20.705050551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6h5xs,Uid:6e08b664-aec1-4d1f-aaf5-d1070a1bd37d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.705636 kubelet[1759]: E0413 23:31:20.705604 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.705675 kubelet[1759]: E0413 23:31:20.705667 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.705694 kubelet[1759]: E0413 23:31:20.705685 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6h5xs" Apr 13 23:31:20.705772 kubelet[1759]: E0413 23:31:20.705732 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-6h5xs_calico-system(6e08b664-aec1-4d1f-aaf5-d1070a1bd37d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-6h5xs_calico-system(6e08b664-aec1-4d1f-aaf5-d1070a1bd37d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-6h5xs" podUID="6e08b664-aec1-4d1f-aaf5-d1070a1bd37d" Apr 13 23:31:20.717411 containerd[1460]: time="2026-04-13T23:31:20.717346393Z" level=error msg="Failed to destroy network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.717580 containerd[1460]: time="2026-04-13T23:31:20.717419772Z" level=error msg="Failed to destroy network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.717695 containerd[1460]: time="2026-04-13T23:31:20.717668764Z" level=error msg="encountered an error cleaning up failed sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.717755 containerd[1460]: time="2026-04-13T23:31:20.717731340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-vfzv5,Uid:e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.717835 containerd[1460]: time="2026-04-13T23:31:20.717765404Z" level=error msg="encountered an error cleaning up failed sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.717887 containerd[1460]: time="2026-04-13T23:31:20.717859511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776667d67-kwkmw,Uid:9ea06102-a332-4253-8eb9-bb324b77c2d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.718228 kubelet[1759]: E0413 23:31:20.718167 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.718306 kubelet[1759]: E0413 23:31:20.718227 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-776667d67-kwkmw" Apr 13 23:31:20.718306 kubelet[1759]: E0413 23:31:20.718244 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-776667d67-kwkmw" Apr 13 23:31:20.718306 kubelet[1759]: E0413 23:31:20.718295 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-776667d67-kwkmw_calico-system(9ea06102-a332-4253-8eb9-bb324b77c2d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-776667d67-kwkmw_calico-system(9ea06102-a332-4253-8eb9-bb324b77c2d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-776667d67-kwkmw" podUID="9ea06102-a332-4253-8eb9-bb324b77c2d2" Apr 13 23:31:20.718406 kubelet[1759]: E0413 23:31:20.718168 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.718406 kubelet[1759]: E0413 23:31:20.718328 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-654bb5c945-vfzv5" Apr 13 23:31:20.718406 kubelet[1759]: E0413 23:31:20.718336 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-654bb5c945-vfzv5" Apr 13 23:31:20.718455 kubelet[1759]: E0413 23:31:20.718353 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654bb5c945-vfzv5_calico-system(e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654bb5c945-vfzv5_calico-system(e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-654bb5c945-vfzv5" podUID="e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7" Apr 13 23:31:20.861609 containerd[1460]: time="2026-04-13T23:31:20.861373803Z" level=error msg="Failed to destroy network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.861609 containerd[1460]: time="2026-04-13T23:31:20.861430821Z" level=error msg="Failed to destroy network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862337 containerd[1460]: time="2026-04-13T23:31:20.862273791Z" level=error msg="encountered an error cleaning up failed sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862431 containerd[1460]: time="2026-04-13T23:31:20.862415459Z" level=error msg="encountered an error cleaning up failed sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862543 containerd[1460]: time="2026-04-13T23:31:20.862505592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4fkd7,Uid:16b5b403-644e-4ee0-ad2f-a49ad057b5c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862643 containerd[1460]: time="2026-04-13T23:31:20.862629279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwj28,Uid:30fde92e-28ad-486a-acd9-ae88e3ee265a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862922 kubelet[1759]: E0413 23:31:20.862888 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.862968 kubelet[1759]: E0413 23:31:20.862941 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bwj28" Apr 13 23:31:20.862968 kubelet[1759]: E0413 23:31:20.862958 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bwj28" Apr 13 23:31:20.863024 kubelet[1759]: E0413 23:31:20.863001 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bwj28_kube-system(30fde92e-28ad-486a-acd9-ae88e3ee265a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bwj28_kube-system(30fde92e-28ad-486a-acd9-ae88e3ee265a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bwj28" podUID="30fde92e-28ad-486a-acd9-ae88e3ee265a" Apr 13 23:31:20.863084 kubelet[1759]: E0413 23:31:20.863039 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.863084 kubelet[1759]: E0413 23:31:20.863050 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4fkd7" Apr 13 23:31:20.863084 kubelet[1759]: E0413 23:31:20.863059 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4fkd7" Apr 13 23:31:20.863158 kubelet[1759]: E0413 23:31:20.863080 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4fkd7_kube-system(16b5b403-644e-4ee0-ad2f-a49ad057b5c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4fkd7_kube-system(16b5b403-644e-4ee0-ad2f-a49ad057b5c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4fkd7" podUID="16b5b403-644e-4ee0-ad2f-a49ad057b5c5" Apr 13 23:31:20.863368 containerd[1460]: time="2026-04-13T23:31:20.863306585Z" level=error msg="Failed to destroy network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 13 23:31:20.863779 containerd[1460]: time="2026-04-13T23:31:20.863626300Z" level=error msg="encountered an error cleaning up failed sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.863919 containerd[1460]: time="2026-04-13T23:31:20.863887203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-wpvsk,Uid:9d2e4322-5d1c-4d78-b98d-4acc3604c168,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.864127 kubelet[1759]: E0413 23:31:20.864072 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.864197 kubelet[1759]: E0413 23:31:20.864147 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-654bb5c945-wpvsk" Apr 13 23:31:20.864197 kubelet[1759]: E0413 
23:31:20.864166 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-654bb5c945-wpvsk" Apr 13 23:31:20.864232 kubelet[1759]: E0413 23:31:20.864208 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654bb5c945-wpvsk_calico-system(9d2e4322-5d1c-4d78-b98d-4acc3604c168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654bb5c945-wpvsk_calico-system(9d2e4322-5d1c-4d78-b98d-4acc3604c168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-654bb5c945-wpvsk" podUID="9d2e4322-5d1c-4d78-b98d-4acc3604c168" Apr 13 23:31:20.864763 containerd[1460]: time="2026-04-13T23:31:20.864702412Z" level=error msg="Failed to destroy network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.865046 containerd[1460]: time="2026-04-13T23:31:20.865005506Z" level=error msg="encountered an error cleaning up failed sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.865072 containerd[1460]: time="2026-04-13T23:31:20.865059138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775c5f9b46-w497s,Uid:cf2113af-27fe-40a0-be9c-b9d5fcb78a7b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.865242 kubelet[1759]: E0413 23:31:20.865211 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:20.865344 kubelet[1759]: E0413 23:31:20.865248 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.865344 kubelet[1759]: E0413 23:31:20.865262 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-775c5f9b46-w497s" Apr 13 23:31:20.865344 kubelet[1759]: E0413 23:31:20.865309 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-775c5f9b46-w497s_calico-system(cf2113af-27fe-40a0-be9c-b9d5fcb78a7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-775c5f9b46-w497s_calico-system(cf2113af-27fe-40a0-be9c-b9d5fcb78a7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-775c5f9b46-w497s" podUID="cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" Apr 13 23:31:21.112544 kubelet[1759]: E0413 23:31:21.112337 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:21.177192 systemd[1]: Created slice kubepods-besteffort-poddb378709_9093_42e0_99e6_ba9beb70b60d.slice - libcontainer container kubepods-besteffort-poddb378709_9093_42e0_99e6_ba9beb70b60d.slice. 
Apr 13 23:31:21.182234 containerd[1460]: time="2026-04-13T23:31:21.182184669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2cfg,Uid:db378709-9093-42e0-99e6-ba9beb70b60d,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:21.286080 containerd[1460]: time="2026-04-13T23:31:21.286014780Z" level=error msg="Failed to destroy network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.287127 containerd[1460]: time="2026-04-13T23:31:21.287065615Z" level=error msg="encountered an error cleaning up failed sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.287208 containerd[1460]: time="2026-04-13T23:31:21.287153100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2cfg,Uid:db378709-9093-42e0-99e6-ba9beb70b60d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.287494 kubelet[1759]: E0413 23:31:21.287465 1759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.287573 kubelet[1759]: E0413 23:31:21.287512 1759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:21.287573 kubelet[1759]: E0413 23:31:21.287530 1759 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2cfg" Apr 13 23:31:21.287629 kubelet[1759]: E0413 23:31:21.287569 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r2cfg_calico-system(db378709-9093-42e0-99e6-ba9beb70b60d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r2cfg_calico-system(db378709-9093-42e0-99e6-ba9beb70b60d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:21.288272 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3-shm.mount: Deactivated successfully. Apr 13 23:31:21.495468 kubelet[1759]: I0413 23:31:21.495295 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:21.496584 containerd[1460]: time="2026-04-13T23:31:21.496518301Z" level=info msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" Apr 13 23:31:21.496727 containerd[1460]: time="2026-04-13T23:31:21.496711785Z" level=info msg="Ensure that sandbox 4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4 in task-service has been cleanup successfully" Apr 13 23:31:21.498700 kubelet[1759]: I0413 23:31:21.498510 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:21.501119 kubelet[1759]: I0413 23:31:21.501064 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:21.501563 containerd[1460]: time="2026-04-13T23:31:21.501500245Z" level=info msg="StopPodSandbox for \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\"" Apr 13 23:31:21.501864 containerd[1460]: time="2026-04-13T23:31:21.501661708Z" level=info msg="Ensure that sandbox b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594 in task-service has been cleanup successfully" Apr 13 23:31:21.506038 kubelet[1759]: I0413 23:31:21.504541 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:21.506308 containerd[1460]: time="2026-04-13T23:31:21.506243730Z" level=info msg="StopPodSandbox for 
\"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" Apr 13 23:31:21.506442 containerd[1460]: time="2026-04-13T23:31:21.506418822Z" level=info msg="Ensure that sandbox 013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf in task-service has been cleanup successfully" Apr 13 23:31:21.506886 containerd[1460]: time="2026-04-13T23:31:21.506870135Z" level=info msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" Apr 13 23:31:21.507677 containerd[1460]: time="2026-04-13T23:31:21.507450414Z" level=info msg="Ensure that sandbox 46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58 in task-service has been cleanup successfully" Apr 13 23:31:21.508475 kubelet[1759]: I0413 23:31:21.508073 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:21.509285 containerd[1460]: time="2026-04-13T23:31:21.509213673Z" level=info msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" Apr 13 23:31:21.509489 containerd[1460]: time="2026-04-13T23:31:21.509442904Z" level=info msg="Ensure that sandbox 84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3 in task-service has been cleanup successfully" Apr 13 23:31:21.511129 kubelet[1759]: I0413 23:31:21.511052 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:21.512520 containerd[1460]: time="2026-04-13T23:31:21.512504525Z" level=info msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" Apr 13 23:31:21.512668 containerd[1460]: time="2026-04-13T23:31:21.512657404Z" level=info msg="Ensure that sandbox 85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3 in task-service has been cleanup successfully" Apr 13 23:31:21.516079 kubelet[1759]: 
I0413 23:31:21.516004 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:21.516680 containerd[1460]: time="2026-04-13T23:31:21.516497036Z" level=info msg="StopPodSandbox for \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\"" Apr 13 23:31:21.518887 containerd[1460]: time="2026-04-13T23:31:21.516743492Z" level=info msg="Ensure that sandbox 744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee in task-service has been cleanup successfully" Apr 13 23:31:21.522985 kubelet[1759]: I0413 23:31:21.522965 1759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:21.523913 containerd[1460]: time="2026-04-13T23:31:21.523620649Z" level=info msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" Apr 13 23:31:21.523913 containerd[1460]: time="2026-04-13T23:31:21.523740784Z" level=info msg="Ensure that sandbox 3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62 in task-service has been cleanup successfully" Apr 13 23:31:21.596152 containerd[1460]: time="2026-04-13T23:31:21.596066232Z" level=error msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" failed" error="failed to destroy network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.597271 containerd[1460]: time="2026-04-13T23:31:21.597203716Z" level=error msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" failed" error="failed to destroy network for sandbox 
\"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.599142 kubelet[1759]: E0413 23:31:21.599072 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:21.599240 kubelet[1759]: E0413 23:31:21.599157 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3"} Apr 13 23:31:21.599240 kubelet[1759]: E0413 23:31:21.599216 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db378709-9093-42e0-99e6-ba9beb70b60d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.599322 kubelet[1759]: E0413 23:31:21.599250 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db378709-9093-42e0-99e6-ba9beb70b60d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2cfg" podUID="db378709-9093-42e0-99e6-ba9beb70b60d" Apr 13 23:31:21.599820 kubelet[1759]: E0413 23:31:21.599382 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:21.599820 kubelet[1759]: E0413 23:31:21.599443 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62"} Apr 13 23:31:21.599820 kubelet[1759]: E0413 23:31:21.599463 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d2e4322-5d1c-4d78-b98d-4acc3604c168\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.599820 kubelet[1759]: E0413 23:31:21.599482 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d2e4322-5d1c-4d78-b98d-4acc3604c168\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-654bb5c945-wpvsk" podUID="9d2e4322-5d1c-4d78-b98d-4acc3604c168" Apr 13 23:31:21.602995 containerd[1460]: time="2026-04-13T23:31:21.602956115Z" level=error msg="StopPodSandbox for \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\" failed" error="failed to destroy network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.603357 kubelet[1759]: E0413 23:31:21.603319 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:21.603404 kubelet[1759]: E0413 23:31:21.603366 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee"} Apr 13 23:31:21.603426 kubelet[1759]: E0413 23:31:21.603399 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30fde92e-28ad-486a-acd9-ae88e3ee265a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Apr 13 23:31:21.603473 kubelet[1759]: E0413 23:31:21.603439 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30fde92e-28ad-486a-acd9-ae88e3ee265a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bwj28" podUID="30fde92e-28ad-486a-acd9-ae88e3ee265a" Apr 13 23:31:21.604850 containerd[1460]: time="2026-04-13T23:31:21.604762887Z" level=error msg="StopPodSandbox for \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\" failed" error="failed to destroy network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.605742 kubelet[1759]: E0413 23:31:21.605378 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:21.605742 kubelet[1759]: E0413 23:31:21.605463 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594"} Apr 13 23:31:21.605742 kubelet[1759]: E0413 23:31:21.605536 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.605958 kubelet[1759]: E0413 23:31:21.605835 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-654bb5c945-vfzv5" podUID="e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7" Apr 13 23:31:21.606996 containerd[1460]: time="2026-04-13T23:31:21.606953911Z" level=error msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" failed" error="failed to destroy network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.607394 kubelet[1759]: E0413 23:31:21.607300 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:21.607394 kubelet[1759]: E0413 23:31:21.607331 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4"} Apr 13 23:31:21.607394 kubelet[1759]: E0413 23:31:21.607349 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.607394 kubelet[1759]: E0413 23:31:21.607371 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-775c5f9b46-w497s" podUID="cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" Apr 13 23:31:21.612470 containerd[1460]: time="2026-04-13T23:31:21.612424327Z" level=error msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" failed" error="failed to destroy network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 13 23:31:21.612847 kubelet[1759]: E0413 23:31:21.612773 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:21.612847 kubelet[1759]: E0413 23:31:21.612835 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3"} Apr 13 23:31:21.612952 kubelet[1759]: E0413 23:31:21.612854 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ea06102-a332-4253-8eb9-bb324b77c2d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.612952 kubelet[1759]: E0413 23:31:21.612875 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ea06102-a332-4253-8eb9-bb324b77c2d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-776667d67-kwkmw" 
podUID="9ea06102-a332-4253-8eb9-bb324b77c2d2" Apr 13 23:31:21.613408 containerd[1460]: time="2026-04-13T23:31:21.613336175Z" level=error msg="StopPodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" failed" error="failed to destroy network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.613524 kubelet[1759]: E0413 23:31:21.613492 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:21.613524 kubelet[1759]: E0413 23:31:21.613523 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf"} Apr 13 23:31:21.613601 kubelet[1759]: E0413 23:31:21.613537 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.613601 kubelet[1759]: E0413 23:31:21.613551 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-6h5xs" podUID="6e08b664-aec1-4d1f-aaf5-d1070a1bd37d" Apr 13 23:31:21.616679 containerd[1460]: time="2026-04-13T23:31:21.616613681Z" level=error msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" failed" error="failed to destroy network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 23:31:21.616847 kubelet[1759]: E0413 23:31:21.616825 1759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:21.616900 kubelet[1759]: E0413 23:31:21.616849 1759 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58"} Apr 13 23:31:21.616900 kubelet[1759]: E0413 23:31:21.616866 1759 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16b5b403-644e-4ee0-ad2f-a49ad057b5c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 23:31:21.616900 kubelet[1759]: E0413 23:31:21.616885 1759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16b5b403-644e-4ee0-ad2f-a49ad057b5c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4fkd7" podUID="16b5b403-644e-4ee0-ad2f-a49ad057b5c5" Apr 13 23:31:21.785838 kubelet[1759]: I0413 23:31:21.785697 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z9jhc" podStartSLOduration=4.109270892 podStartE2EDuration="21.785663111s" podCreationTimestamp="2026-04-13 23:31:00 +0000 UTC" firstStartedPulling="2026-04-13 23:31:01.207022677 +0000 UTC m=+66.247293823" lastFinishedPulling="2026-04-13 23:31:18.88341489 +0000 UTC m=+83.923686042" observedRunningTime="2026-04-13 23:31:21.76732919 +0000 UTC m=+86.807600351" watchObservedRunningTime="2026-04-13 23:31:21.785663111 +0000 UTC m=+86.825934265" Apr 13 23:31:22.113713 kubelet[1759]: E0413 23:31:22.113492 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:22.537413 containerd[1460]: time="2026-04-13T23:31:22.537357040Z" level=info msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.781 [INFO][3037] 
cni-plugin/k8s.go 639: Endpoint was modified before it could be deleted. Retrying... ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0", GenerateName:"whisker-775c5f9b46-", Namespace:"calico-system", SelfLink:"", UID:"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775c5f9b46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"whisker-775c5f9b46-w497s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib1c0666b44f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.901 [INFO][3037] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.901 [INFO][3037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" iface="eth0" netns="/var/run/netns/cni-596a30a0-1c57-6e09-6dd0-59edaedbaa02" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.902 [INFO][3037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" iface="eth0" netns="/var/run/netns/cni-596a30a0-1c57-6e09-6dd0-59edaedbaa02" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.902 [INFO][3037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" iface="eth0" netns="/var/run/netns/cni-596a30a0-1c57-6e09-6dd0-59edaedbaa02" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.902 [INFO][3037] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.902 [INFO][3037] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.925 [INFO][3068] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.926 [INFO][3068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.926 [INFO][3068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.988 [WARNING][3068] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:22.989 [INFO][3068] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:23.009 [INFO][3068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:23.014842 containerd[1460]: 2026-04-13 23:31:23.012 [INFO][3037] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:31:23.018137 containerd[1460]: time="2026-04-13T23:31:23.018032209Z" level=info msg="TearDown network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" successfully" Apr 13 23:31:23.018137 containerd[1460]: time="2026-04-13T23:31:23.018115426Z" level=info msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" returns successfully" Apr 13 23:31:23.018564 systemd[1]: run-netns-cni\x2d596a30a0\x2d1c57\x2d6e09\x2d6dd0\x2d59edaedbaa02.mount: Deactivated successfully. 
Apr 13 23:31:23.107475 kubelet[1759]: I0413 23:31:23.107017 1759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-ca-bundle\") pod \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " Apr 13 23:31:23.107475 kubelet[1759]: I0413 23:31:23.107075 1759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-nginx-config\") pod \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " Apr 13 23:31:23.107475 kubelet[1759]: I0413 23:31:23.107118 1759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98r7g\" (UniqueName: \"kubernetes.io/projected/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-kube-api-access-98r7g\") pod \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " Apr 13 23:31:23.107475 kubelet[1759]: I0413 23:31:23.107137 1759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-backend-key-pair\") pod \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\" (UID: \"cf2113af-27fe-40a0-be9c-b9d5fcb78a7b\") " Apr 13 23:31:23.107475 kubelet[1759]: I0413 23:31:23.107414 1759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" (UID: "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 23:31:23.107723 kubelet[1759]: I0413 23:31:23.107460 1759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" (UID: "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 23:31:23.112178 systemd[1]: var-lib-kubelet-pods-cf2113af\x2d27fe\x2d40a0\x2dbe9c\x2db9d5fcb78a7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98r7g.mount: Deactivated successfully. Apr 13 23:31:23.112273 systemd[1]: var-lib-kubelet-pods-cf2113af\x2d27fe\x2d40a0\x2dbe9c\x2db9d5fcb78a7b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 23:31:23.113045 kubelet[1759]: I0413 23:31:23.112582 1759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" (UID: "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 23:31:23.113045 kubelet[1759]: I0413 23:31:23.112594 1759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-kube-api-access-98r7g" (OuterVolumeSpecName: "kube-api-access-98r7g") pod "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" (UID: "cf2113af-27fe-40a0-be9c-b9d5fcb78a7b"). InnerVolumeSpecName "kube-api-access-98r7g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 23:31:23.113854 kubelet[1759]: E0413 23:31:23.113759 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:23.208278 kubelet[1759]: I0413 23:31:23.208233 1759 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-ca-bundle\") on node \"10.0.0.139\" DevicePath \"\"" Apr 13 23:31:23.208278 kubelet[1759]: I0413 23:31:23.208267 1759 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-nginx-config\") on node \"10.0.0.139\" DevicePath \"\"" Apr 13 23:31:23.208278 kubelet[1759]: I0413 23:31:23.208278 1759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98r7g\" (UniqueName: \"kubernetes.io/projected/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-kube-api-access-98r7g\") on node \"10.0.0.139\" DevicePath \"\"" Apr 13 23:31:23.208278 kubelet[1759]: I0413 23:31:23.208296 1759 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b-whisker-backend-key-pair\") on node \"10.0.0.139\" DevicePath \"\"" Apr 13 23:31:23.555522 systemd[1]: Removed slice kubepods-besteffort-podcf2113af_27fe_40a0_be9c_b9d5fcb78a7b.slice - libcontainer container kubepods-besteffort-podcf2113af_27fe_40a0_be9c_b9d5fcb78a7b.slice. 
Apr 13 23:31:24.114080 kubelet[1759]: E0413 23:31:24.113965 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:24.161265 kubelet[1759]: I0413 23:31:24.161182 1759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2113af-27fe-40a0-be9c-b9d5fcb78a7b" path="/var/lib/kubelet/pods/cf2113af-27fe-40a0-be9c-b9d5fcb78a7b/volumes" Apr 13 23:31:24.192295 systemd[1]: Created slice kubepods-besteffort-pod9cf7cccb_1824_4a1d_ab06_c7ffe303249f.slice - libcontainer container kubepods-besteffort-pod9cf7cccb_1824_4a1d_ab06_c7ffe303249f.slice. Apr 13 23:31:24.216469 kubelet[1759]: I0413 23:31:24.216404 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9cf7cccb-1824-4a1d-ab06-c7ffe303249f-whisker-backend-key-pair\") pod \"whisker-7b5696d7ff-kd6rv\" (UID: \"9cf7cccb-1824-4a1d-ab06-c7ffe303249f\") " pod="calico-system/whisker-7b5696d7ff-kd6rv" Apr 13 23:31:24.216469 kubelet[1759]: I0413 23:31:24.216462 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cf7cccb-1824-4a1d-ab06-c7ffe303249f-whisker-ca-bundle\") pod \"whisker-7b5696d7ff-kd6rv\" (UID: \"9cf7cccb-1824-4a1d-ab06-c7ffe303249f\") " pod="calico-system/whisker-7b5696d7ff-kd6rv" Apr 13 23:31:24.216469 kubelet[1759]: I0413 23:31:24.216496 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6bh\" (UniqueName: \"kubernetes.io/projected/9cf7cccb-1824-4a1d-ab06-c7ffe303249f-kube-api-access-xs6bh\") pod \"whisker-7b5696d7ff-kd6rv\" (UID: \"9cf7cccb-1824-4a1d-ab06-c7ffe303249f\") " pod="calico-system/whisker-7b5696d7ff-kd6rv" Apr 13 23:31:24.216469 kubelet[1759]: I0413 23:31:24.216520 1759 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9cf7cccb-1824-4a1d-ab06-c7ffe303249f-nginx-config\") pod \"whisker-7b5696d7ff-kd6rv\" (UID: \"9cf7cccb-1824-4a1d-ab06-c7ffe303249f\") " pod="calico-system/whisker-7b5696d7ff-kd6rv" Apr 13 23:31:24.503712 containerd[1460]: time="2026-04-13T23:31:24.503354136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5696d7ff-kd6rv,Uid:9cf7cccb-1824-4a1d-ab06-c7ffe303249f,Namespace:calico-system,Attempt:0,}" Apr 13 23:31:24.909856 kernel: calico-node[3185]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 23:31:25.115309 kubelet[1759]: E0413 23:31:25.115223 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:25.314919 systemd-networkd[1386]: vxlan.calico: Link UP Apr 13 23:31:25.314933 systemd-networkd[1386]: vxlan.calico: Gained carrier Apr 13 23:31:25.332064 systemd-networkd[1386]: cali0392f9c8f37: Link UP Apr 13 23:31:25.332211 systemd-networkd[1386]: cali0392f9c8f37: Gained carrier Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.583 [ERROR][3199] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.649 [INFO][3199] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0 whisker-7b5696d7ff- calico-system 9cf7cccb-1824-4a1d-ab06-c7ffe303249f 1356 0 2026-04-13 23:31:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b5696d7ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 10.0.0.139 whisker-7b5696d7ff-kd6rv eth0 
whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0392f9c8f37 [] [] }} ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.649 [INFO][3199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.747 [INFO][3232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" HandleID="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Workload="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.806 [INFO][3232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" HandleID="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Workload="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efb00), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"whisker-7b5696d7ff-kd6rv", "timestamp":"2026-04-13 23:31:24.747624048 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192dc0)} Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.807 [INFO][3232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.807 [INFO][3232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.807 [INFO][3232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.832 [INFO][3232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.857 [INFO][3232] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:24.954 [INFO][3232] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.002 [INFO][3232] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.017 [INFO][3232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.018 [INFO][3232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.123 [INFO][3232] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6 Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.249 [INFO][3232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.320 [INFO][3232] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.1/26] block=192.168.100.0/26 handle="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.320 [INFO][3232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.1/26] handle="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" host="10.0.0.139" Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.320 [INFO][3232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:25.415214 containerd[1460]: 2026-04-13 23:31:25.320 [INFO][3232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.1/26] IPv6=[] ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" HandleID="k8s-pod-network.59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Workload="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.322 [INFO][3199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0", GenerateName:"whisker-7b5696d7ff-", Namespace:"calico-system", SelfLink:"", UID:"9cf7cccb-1824-4a1d-ab06-c7ffe303249f", ResourceVersion:"1356", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5696d7ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"whisker-7b5696d7ff-kd6rv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0392f9c8f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.322 [INFO][3199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.1/32] ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.322 [INFO][3199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0392f9c8f37 ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.330 [INFO][3199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.331 [INFO][3199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" 
Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0", GenerateName:"whisker-7b5696d7ff-", Namespace:"calico-system", SelfLink:"", UID:"9cf7cccb-1824-4a1d-ab06-c7ffe303249f", ResourceVersion:"1356", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5696d7ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6", Pod:"whisker-7b5696d7ff-kd6rv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0392f9c8f37", MAC:"02:73:cf:a0:a1:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:25.415974 containerd[1460]: 2026-04-13 23:31:25.412 [INFO][3199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6" Namespace="calico-system" Pod="whisker-7b5696d7ff-kd6rv" WorkloadEndpoint="10.0.0.139-k8s-whisker--7b5696d7ff--kd6rv-eth0" Apr 13 23:31:25.504093 containerd[1460]: 
time="2026-04-13T23:31:25.503924332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:25.504093 containerd[1460]: time="2026-04-13T23:31:25.504007578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:25.504093 containerd[1460]: time="2026-04-13T23:31:25.504044694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:25.504534 containerd[1460]: time="2026-04-13T23:31:25.504157098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:25.552527 systemd[1]: Started cri-containerd-59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6.scope - libcontainer container 59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6. 
Apr 13 23:31:25.567452 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:25.606981 containerd[1460]: time="2026-04-13T23:31:25.606787347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5696d7ff-kd6rv,Uid:9cf7cccb-1824-4a1d-ab06-c7ffe303249f,Namespace:calico-system,Attempt:0,} returns sandbox id \"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6\"" Apr 13 23:31:25.609955 containerd[1460]: time="2026-04-13T23:31:25.609895682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 23:31:26.116418 kubelet[1759]: E0413 23:31:26.116276 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:27.104721 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Apr 13 23:31:27.117183 kubelet[1759]: E0413 23:31:27.117092 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:27.296312 systemd-networkd[1386]: cali0392f9c8f37: Gained IPv6LL Apr 13 23:31:27.466121 containerd[1460]: time="2026-04-13T23:31:27.465918722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:27.466687 containerd[1460]: time="2026-04-13T23:31:27.466633456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 23:31:27.467529 containerd[1460]: time="2026-04-13T23:31:27.467478511Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:27.469257 containerd[1460]: time="2026-04-13T23:31:27.469203203Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:27.469977 containerd[1460]: time="2026-04-13T23:31:27.469932603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.859991119s" Apr 13 23:31:27.469977 containerd[1460]: time="2026-04-13T23:31:27.469972583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 23:31:27.474454 containerd[1460]: time="2026-04-13T23:31:27.474415762Z" level=info msg="CreateContainer within sandbox \"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 23:31:27.487557 containerd[1460]: time="2026-04-13T23:31:27.487488683Z" level=info msg="CreateContainer within sandbox \"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a4565af1138a0b197a51419f41734eda50fa5d950691c64aa3490b11cd999054\"" Apr 13 23:31:27.488300 containerd[1460]: time="2026-04-13T23:31:27.488269301Z" level=info msg="StartContainer for \"a4565af1138a0b197a51419f41734eda50fa5d950691c64aa3490b11cd999054\"" Apr 13 23:31:27.518170 systemd[1]: Started cri-containerd-a4565af1138a0b197a51419f41734eda50fa5d950691c64aa3490b11cd999054.scope - libcontainer container a4565af1138a0b197a51419f41734eda50fa5d950691c64aa3490b11cd999054. 
Apr 13 23:31:27.556205 containerd[1460]: time="2026-04-13T23:31:27.556127209Z" level=info msg="StartContainer for \"a4565af1138a0b197a51419f41734eda50fa5d950691c64aa3490b11cd999054\" returns successfully" Apr 13 23:31:27.558083 containerd[1460]: time="2026-04-13T23:31:27.557925438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 23:31:28.117758 kubelet[1759]: E0413 23:31:28.117668 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:29.118536 kubelet[1759]: E0413 23:31:29.118462 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:29.844542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892185410.mount: Deactivated successfully. Apr 13 23:31:29.869683 containerd[1460]: time="2026-04-13T23:31:29.869494323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:29.870209 containerd[1460]: time="2026-04-13T23:31:29.870144033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 23:31:29.871063 containerd[1460]: time="2026-04-13T23:31:29.871014675Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:29.873231 containerd[1460]: time="2026-04-13T23:31:29.873160359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:29.874080 containerd[1460]: time="2026-04-13T23:31:29.874053433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image 
id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.316100502s" Apr 13 23:31:29.874151 containerd[1460]: time="2026-04-13T23:31:29.874085828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 23:31:29.879248 containerd[1460]: time="2026-04-13T23:31:29.879202416Z" level=info msg="CreateContainer within sandbox \"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 23:31:29.893057 containerd[1460]: time="2026-04-13T23:31:29.893001238Z" level=info msg="CreateContainer within sandbox \"59f25e714437009771d1c7ad8afc8228e83ecf9a0f83cdb4b984a4aa63e624f6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dd45a17bf043379360df05fa9640810c936673a7fcd9e91d9b4cb08519e719bb\"" Apr 13 23:31:29.893709 containerd[1460]: time="2026-04-13T23:31:29.893675282Z" level=info msg="StartContainer for \"dd45a17bf043379360df05fa9640810c936673a7fcd9e91d9b4cb08519e719bb\"" Apr 13 23:31:29.924078 systemd[1]: Started cri-containerd-dd45a17bf043379360df05fa9640810c936673a7fcd9e91d9b4cb08519e719bb.scope - libcontainer container dd45a17bf043379360df05fa9640810c936673a7fcd9e91d9b4cb08519e719bb. 
Apr 13 23:31:29.972865 containerd[1460]: time="2026-04-13T23:31:29.972736098Z" level=info msg="StartContainer for \"dd45a17bf043379360df05fa9640810c936673a7fcd9e91d9b4cb08519e719bb\" returns successfully" Apr 13 23:31:30.119035 kubelet[1759]: E0413 23:31:30.118778 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:30.879855 kubelet[1759]: I0413 23:31:30.879703 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b5696d7ff-kd6rv" podStartSLOduration=2.613624707 podStartE2EDuration="6.879687557s" podCreationTimestamp="2026-04-13 23:31:24 +0000 UTC" firstStartedPulling="2026-04-13 23:31:25.609070236 +0000 UTC m=+90.649341389" lastFinishedPulling="2026-04-13 23:31:29.875133094 +0000 UTC m=+94.915404239" observedRunningTime="2026-04-13 23:31:30.873181894 +0000 UTC m=+95.913453052" watchObservedRunningTime="2026-04-13 23:31:30.879687557 +0000 UTC m=+95.919958720" Apr 13 23:31:31.120122 kubelet[1759]: E0413 23:31:31.120005 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:32.121268 kubelet[1759]: E0413 23:31:32.120961 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:32.160156 containerd[1460]: time="2026-04-13T23:31:32.160061750Z" level=info msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" Apr 13 23:31:32.164770 containerd[1460]: time="2026-04-13T23:31:32.164699332Z" level=info msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.650 [INFO][3539] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.658 
[INFO][3539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" iface="eth0" netns="/var/run/netns/cni-a898004f-9893-31ad-61bf-03f8c403bd4f" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.660 [INFO][3539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" iface="eth0" netns="/var/run/netns/cni-a898004f-9893-31ad-61bf-03f8c403bd4f" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.661 [INFO][3539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" iface="eth0" netns="/var/run/netns/cni-a898004f-9893-31ad-61bf-03f8c403bd4f" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.661 [INFO][3539] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.661 [INFO][3539] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.710 [INFO][3551] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.710 [INFO][3551] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.710 [INFO][3551] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.841 [WARNING][3551] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.841 [INFO][3551] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.910 [INFO][3551] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:32.912676 containerd[1460]: 2026-04-13 23:31:32.911 [INFO][3539] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:31:32.913054 containerd[1460]: time="2026-04-13T23:31:32.912999584Z" level=info msg="TearDown network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" successfully" Apr 13 23:31:32.913054 containerd[1460]: time="2026-04-13T23:31:32.913035205Z" level=info msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" returns successfully" Apr 13 23:31:32.915383 systemd[1]: run-netns-cni\x2da898004f\x2d9893\x2d31ad\x2d61bf\x2d03f8c403bd4f.mount: Deactivated successfully. 
Apr 13 23:31:32.915617 kubelet[1759]: E0413 23:31:32.915567 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:32.916078 containerd[1460]: time="2026-04-13T23:31:32.916043349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4fkd7,Uid:16b5b403-644e-4ee0-ad2f-a49ad057b5c5,Namespace:kube-system,Attempt:1,}" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.751 [INFO][3523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.752 [INFO][3523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" iface="eth0" netns="/var/run/netns/cni-65a65ef2-d067-3229-666f-2a81adf1ae54" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.752 [INFO][3523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" iface="eth0" netns="/var/run/netns/cni-65a65ef2-d067-3229-666f-2a81adf1ae54" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.752 [INFO][3523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" iface="eth0" netns="/var/run/netns/cni-65a65ef2-d067-3229-666f-2a81adf1ae54" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.752 [INFO][3523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.752 [INFO][3523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.775 [INFO][3559] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.775 [INFO][3559] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:32.910 [INFO][3559] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:33.030 [WARNING][3559] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:33.030 [INFO][3559] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:33.099 [INFO][3559] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:33.103602 containerd[1460]: 2026-04-13 23:31:33.102 [INFO][3523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:31:33.104164 containerd[1460]: time="2026-04-13T23:31:33.103862977Z" level=info msg="TearDown network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" successfully" Apr 13 23:31:33.104164 containerd[1460]: time="2026-04-13T23:31:33.103902720Z" level=info msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" returns successfully" Apr 13 23:31:33.107033 systemd[1]: run-netns-cni\x2d65a65ef2\x2dd067\x2d3229\x2d666f\x2d2a81adf1ae54.mount: Deactivated successfully. 
Apr 13 23:31:33.107685 containerd[1460]: time="2026-04-13T23:31:33.107345631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-wpvsk,Uid:9d2e4322-5d1c-4d78-b98d-4acc3604c168,Namespace:calico-system,Attempt:1,}" Apr 13 23:31:33.122334 kubelet[1759]: E0413 23:31:33.122260 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:33.160909 containerd[1460]: time="2026-04-13T23:31:33.160185813Z" level=info msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" Apr 13 23:31:33.160909 containerd[1460]: time="2026-04-13T23:31:33.160621663Z" level=info msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" Apr 13 23:31:34.123064 kubelet[1759]: E0413 23:31:34.122940 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:35.123917 kubelet[1759]: E0413 23:31:35.123783 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:35.574691 systemd-networkd[1386]: cali7cb6ec363e7: Link UP Apr 13 23:31:35.576357 systemd-networkd[1386]: cali7cb6ec363e7: Gained carrier Apr 13 23:31:35.988645 kubelet[1759]: E0413 23:31:35.988369 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:36.124225 kubelet[1759]: E0413 23:31:36.124032 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:36.162711 containerd[1460]: time="2026-04-13T23:31:36.162668391Z" level=info msg="StopPodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" iface="eth0" netns="/var/run/netns/cni-4967829c-5ea6-5c5d-3a0a-b1eaa13f03f1" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" iface="eth0" netns="/var/run/netns/cni-4967829c-5ea6-5c5d-3a0a-b1eaa13f03f1" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" iface="eth0" netns="/var/run/netns/cni-4967829c-5ea6-5c5d-3a0a-b1eaa13f03f1" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.956 [INFO][3611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.976 [INFO][3636] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:33.976 [INFO][3636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:35.569 [INFO][3636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:36.012 [WARNING][3636] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:36.012 [INFO][3636] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:36.286 [INFO][3636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:36.290749 containerd[1460]: 2026-04-13 23:31:36.289 [INFO][3611] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:31:36.291457 containerd[1460]: time="2026-04-13T23:31:36.290993198Z" level=info msg="TearDown network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" successfully" Apr 13 23:31:36.291457 containerd[1460]: time="2026-04-13T23:31:36.291287209Z" level=info msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" returns successfully" Apr 13 23:31:36.294277 systemd[1]: run-netns-cni\x2d4967829c\x2d5ea6\x2d5c5d\x2d3a0a\x2db1eaa13f03f1.mount: Deactivated successfully. 
Apr 13 23:31:36.299501 containerd[1460]: time="2026-04-13T23:31:36.299396836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776667d67-kwkmw,Uid:9ea06102-a332-4253-8eb9-bb324b77c2d2,Namespace:calico-system,Attempt:1,}" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.255 [INFO][3566] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0 coredns-66bc5c9577- kube-system 16b5b403-644e-4ee0-ad2f-a49ad057b5c5 1399 0 2026-04-13 23:28:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.139 coredns-66bc5c9577-4fkd7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cb6ec363e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.256 [INFO][3566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.375 [INFO][3627] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" HandleID="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.712 
[INFO][3627] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" HandleID="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135e60), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.139", "pod":"coredns-66bc5c9577-4fkd7", "timestamp":"2026-04-13 23:31:33.375895118 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000714000)} Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.712 [INFO][3627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.712 [INFO][3627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:33.712 [INFO][3627] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:34.156 [INFO][3627] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:34.663 [INFO][3627] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.170 [INFO][3627] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.279 [INFO][3627] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.331 [INFO][3627] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.332 [INFO][3627] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.369 [INFO][3627] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712 Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.438 [INFO][3627] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.568 [INFO][3627] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.2/26] block=192.168.100.0/26 
handle="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.568 [INFO][3627] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.2/26] handle="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" host="10.0.0.139" Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.568 [INFO][3627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:36.469349 containerd[1460]: 2026-04-13 23:31:35.569 [INFO][3627] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.2/26] IPv6=[] ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" HandleID="k8s-pod-network.6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:35.571 [INFO][3566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"16b5b403-644e-4ee0-ad2f-a49ad057b5c5", ResourceVersion:"1399", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"coredns-66bc5c9577-4fkd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6ec363e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:35.571 [INFO][3566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.2/32] ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:35.571 [INFO][3566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cb6ec363e7 ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 
13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:35.576 [INFO][3566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:35.577 [INFO][3566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"16b5b403-644e-4ee0-ad2f-a49ad057b5c5", ResourceVersion:"1399", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712", Pod:"coredns-66bc5c9577-4fkd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6ec363e7", 
MAC:"66:85:cf:65:20:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:36.470314 containerd[1460]: 2026-04-13 23:31:36.467 [INFO][3566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712" Namespace="kube-system" Pod="coredns-66bc5c9577-4fkd7" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:31:36.489138 containerd[1460]: time="2026-04-13T23:31:36.488751142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:36.489138 containerd[1460]: time="2026-04-13T23:31:36.489039679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:36.489138 containerd[1460]: time="2026-04-13T23:31:36.489062271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:36.489676 containerd[1460]: time="2026-04-13T23:31:36.489478992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:36.515319 systemd[1]: Started cri-containerd-6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712.scope - libcontainer container 6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712. Apr 13 23:31:36.528875 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:36.555238 containerd[1460]: time="2026-04-13T23:31:36.555045741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4fkd7,Uid:16b5b403-644e-4ee0-ad2f-a49ad057b5c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712\"" Apr 13 23:31:36.556021 kubelet[1759]: E0413 23:31:36.555997 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:36.556710 containerd[1460]: time="2026-04-13T23:31:36.556682541Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 23:31:36.831989 systemd-networkd[1386]: cali7cb6ec363e7: Gained IPv6LL Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" iface="eth0" netns="/var/run/netns/cni-bc1119cb-f316-fe8a-6ceb-3f0831a8c544" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" iface="eth0" netns="/var/run/netns/cni-bc1119cb-f316-fe8a-6ceb-3f0831a8c544" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" iface="eth0" netns="/var/run/netns/cni-bc1119cb-f316-fe8a-6ceb-3f0831a8c544" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.336 [INFO][3610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.358 [INFO][3652] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:34.359 [INFO][3652] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:36.287 [INFO][3652] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:36.664 [WARNING][3652] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:36.664 [INFO][3652] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:36.870 [INFO][3652] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:36.872957 containerd[1460]: 2026-04-13 23:31:36.871 [INFO][3610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:31:36.873488 containerd[1460]: time="2026-04-13T23:31:36.873164631Z" level=info msg="TearDown network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" successfully" Apr 13 23:31:36.873488 containerd[1460]: time="2026-04-13T23:31:36.873192213Z" level=info msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" returns successfully" Apr 13 23:31:36.876782 containerd[1460]: time="2026-04-13T23:31:36.876695898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2cfg,Uid:db378709-9093-42e0-99e6-ba9beb70b60d,Namespace:calico-system,Attempt:1,}" Apr 13 23:31:37.126251 kubelet[1759]: E0413 23:31:37.125422 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:37.160935 containerd[1460]: time="2026-04-13T23:31:37.160076006Z" level=info msg="StopPodSandbox for \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\"" Apr 13 23:31:37.164531 containerd[1460]: 
time="2026-04-13T23:31:37.164483120Z" level=info msg="StopPodSandbox for \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\"" Apr 13 23:31:37.295328 systemd[1]: run-netns-cni\x2dbc1119cb\x2df316\x2dfe8a\x2d6ceb\x2d3f0831a8c544.mount: Deactivated successfully. Apr 13 23:31:37.770675 containerd[1460]: time="2026-04-13T23:31:37.770592371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:37.771884 containerd[1460]: time="2026-04-13T23:31:37.771723361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 13 23:31:37.772993 containerd[1460]: time="2026-04-13T23:31:37.772937140Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:37.775432 containerd[1460]: time="2026-04-13T23:31:37.775383803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:37.776363 containerd[1460]: time="2026-04-13T23:31:37.776341346Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.219626986s" Apr 13 23:31:37.776409 containerd[1460]: time="2026-04-13T23:31:37.776367975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 13 23:31:37.781109 containerd[1460]: 
time="2026-04-13T23:31:37.781055768Z" level=info msg="CreateContainer within sandbox \"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 23:31:37.794395 containerd[1460]: time="2026-04-13T23:31:37.794323890Z" level=info msg="CreateContainer within sandbox \"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5656eeefd1897d1dec3c74ba6ce0d07f3cb3bc69ef8aaf6484ddea5cd95b36b\"" Apr 13 23:31:37.795031 containerd[1460]: time="2026-04-13T23:31:37.795000108Z" level=info msg="StartContainer for \"c5656eeefd1897d1dec3c74ba6ce0d07f3cb3bc69ef8aaf6484ddea5cd95b36b\"" Apr 13 23:31:37.884660 systemd[1]: Started cri-containerd-c5656eeefd1897d1dec3c74ba6ce0d07f3cb3bc69ef8aaf6484ddea5cd95b36b.scope - libcontainer container c5656eeefd1897d1dec3c74ba6ce0d07f3cb3bc69ef8aaf6484ddea5cd95b36b. Apr 13 23:31:37.914488 containerd[1460]: time="2026-04-13T23:31:37.914407942Z" level=info msg="StartContainer for \"c5656eeefd1897d1dec3c74ba6ce0d07f3cb3bc69ef8aaf6484ddea5cd95b36b\" returns successfully" Apr 13 23:31:38.125771 kubelet[1759]: E0413 23:31:38.125648 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:38.674991 kubelet[1759]: E0413 23:31:38.674957 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:39.126849 kubelet[1759]: E0413 23:31:39.126731 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:39.563006 kubelet[1759]: I0413 23:31:39.562912 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4fkd7" podStartSLOduration=203.342243929 
podStartE2EDuration="3m24.562893942s" podCreationTimestamp="2026-04-13 23:28:15 +0000 UTC" firstStartedPulling="2026-04-13 23:31:36.556440708 +0000 UTC m=+101.596711852" lastFinishedPulling="2026-04-13 23:31:37.777090722 +0000 UTC m=+102.817361865" observedRunningTime="2026-04-13 23:31:38.957563228 +0000 UTC m=+103.997834378" watchObservedRunningTime="2026-04-13 23:31:39.562893942 +0000 UTC m=+104.603165086" Apr 13 23:31:39.678515 kubelet[1759]: E0413 23:31:39.678456 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:39.882448 systemd-networkd[1386]: cali1aa9eafa915: Link UP Apr 13 23:31:39.882624 systemd-networkd[1386]: cali1aa9eafa915: Gained carrier Apr 13 23:31:40.161715 kubelet[1759]: E0413 23:31:40.161564 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:33.413 [INFO][3580] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0 calico-apiserver-654bb5c945- calico-system 9d2e4322-5d1c-4d78-b98d-4acc3604c168 1400 0 2026-04-13 23:30:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654bb5c945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.139 calico-apiserver-654bb5c945-wpvsk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1aa9eafa915 [] [] }} ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-" Apr 13 23:31:40.266838 
containerd[1460]: 2026-04-13 23:31:33.413 [INFO][3580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:34.157 [INFO][3644] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" HandleID="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:34.517 [INFO][3644] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" HandleID="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0680), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"calico-apiserver-654bb5c945-wpvsk", "timestamp":"2026-04-13 23:31:34.157310256 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017b600)} Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:34.517 [INFO][3644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:36.870 [INFO][3644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:36.870 [INFO][3644] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:37.338 [INFO][3644] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:37.653 [INFO][3644] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:38.448 [INFO][3644] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:38.549 [INFO][3644] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:38.785 [INFO][3644] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:38.785 [INFO][3644] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:38.908 [INFO][3644] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77 Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:39.407 [INFO][3644] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:39.876 [INFO][3644] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.3/26] block=192.168.100.0/26 
handle="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:39.876 [INFO][3644] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.3/26] handle="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" host="10.0.0.139" Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:39.876 [INFO][3644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:40.266838 containerd[1460]: 2026-04-13 23:31:39.876 [INFO][3644] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.3/26] IPv6=[] ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" HandleID="k8s-pod-network.1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:39.879 [INFO][3580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"9d2e4322-5d1c-4d78-b98d-4acc3604c168", ResourceVersion:"1400", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"calico-apiserver-654bb5c945-wpvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1aa9eafa915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:39.880 [INFO][3580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.3/32] ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:39.880 [INFO][3580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aa9eafa915 ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:39.882 [INFO][3580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:39.884 [INFO][3580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"9d2e4322-5d1c-4d78-b98d-4acc3604c168", ResourceVersion:"1400", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77", Pod:"calico-apiserver-654bb5c945-wpvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1aa9eafa915", MAC:"1a:5e:7b:bb:83:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:40.268069 containerd[1460]: 2026-04-13 23:31:40.265 [INFO][3580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-wpvsk" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.548 [INFO][3690] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.548 [INFO][3690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" iface="eth0" netns="/var/run/netns/cni-c54f4505-02b1-874f-c06f-eec15165bee3" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.548 [INFO][3690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" iface="eth0" netns="/var/run/netns/cni-c54f4505-02b1-874f-c06f-eec15165bee3" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.549 [INFO][3690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" iface="eth0" netns="/var/run/netns/cni-c54f4505-02b1-874f-c06f-eec15165bee3" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.549 [INFO][3690] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.549 [INFO][3690] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.572 [INFO][3879] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:37.572 [INFO][3879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:39.878 [INFO][3879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:40.095 [WARNING][3879] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:40.095 [INFO][3879] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:40.284 [INFO][3879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:40.288887 containerd[1460]: 2026-04-13 23:31:40.286 [INFO][3690] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:31:40.292278 systemd[1]: run-netns-cni\x2dc54f4505\x2d02b1\x2d874f\x2dc06f\x2deec15165bee3.mount: Deactivated successfully. Apr 13 23:31:40.294261 containerd[1460]: time="2026-04-13T23:31:40.294059843Z" level=info msg="TearDown network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" successfully" Apr 13 23:31:40.294261 containerd[1460]: time="2026-04-13T23:31:40.294093061Z" level=info msg="StopPodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" returns successfully" Apr 13 23:31:40.297740 containerd[1460]: time="2026-04-13T23:31:40.297354432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:40.297740 containerd[1460]: time="2026-04-13T23:31:40.297420000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:40.297740 containerd[1460]: time="2026-04-13T23:31:40.297439266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:40.297740 containerd[1460]: time="2026-04-13T23:31:40.297523840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:40.306628 containerd[1460]: time="2026-04-13T23:31:40.301459209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6h5xs,Uid:6e08b664-aec1-4d1f-aaf5-d1070a1bd37d,Namespace:calico-system,Attempt:1,}" Apr 13 23:31:40.326080 systemd[1]: Started cri-containerd-1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77.scope - libcontainer container 1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77. Apr 13 23:31:40.367297 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:40.433726 containerd[1460]: time="2026-04-13T23:31:40.433549986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-wpvsk,Uid:9d2e4322-5d1c-4d78-b98d-4acc3604c168,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77\"" Apr 13 23:31:40.436288 containerd[1460]: time="2026-04-13T23:31:40.436089936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 23:31:40.683559 kubelet[1759]: E0413 23:31:40.683388 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:41.163013 kubelet[1759]: E0413 23:31:41.162869 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 
23:31:41.632351 systemd-networkd[1386]: cali1aa9eafa915: Gained IPv6LL Apr 13 23:31:42.170480 kubelet[1759]: E0413 23:31:42.170404 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:43.171483 kubelet[1759]: E0413 23:31:43.171403 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:43.405235 systemd-networkd[1386]: cali3012d634e5a: Link UP Apr 13 23:31:43.406006 systemd-networkd[1386]: cali3012d634e5a: Gained carrier Apr 13 23:31:43.937750 containerd[1460]: time="2026-04-13T23:31:43.937671093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:43.938500 containerd[1460]: time="2026-04-13T23:31:43.938466591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 23:31:43.939886 containerd[1460]: time="2026-04-13T23:31:43.939830297Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:43.942919 containerd[1460]: time="2026-04-13T23:31:43.942850661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:43.943690 containerd[1460]: time="2026-04-13T23:31:43.943623356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.507468661s" 
Apr 13 23:31:43.943690 containerd[1460]: time="2026-04-13T23:31:43.943666212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 23:31:43.948986 containerd[1460]: time="2026-04-13T23:31:43.948906600Z" level=info msg="CreateContainer within sandbox \"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 23:31:43.967368 containerd[1460]: time="2026-04-13T23:31:43.967272313Z" level=info msg="CreateContainer within sandbox \"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c59b44d9b5de80dfd1b0c4af2ab8dfae8c9e22e7e64e4c43023fa268054cbde7\"" Apr 13 23:31:43.968198 containerd[1460]: time="2026-04-13T23:31:43.968135447Z" level=info msg="StartContainer for \"c59b44d9b5de80dfd1b0c4af2ab8dfae8c9e22e7e64e4c43023fa268054cbde7\"" Apr 13 23:31:44.013148 systemd[1]: Started cri-containerd-c59b44d9b5de80dfd1b0c4af2ab8dfae8c9e22e7e64e4c43023fa268054cbde7.scope - libcontainer container c59b44d9b5de80dfd1b0c4af2ab8dfae8c9e22e7e64e4c43023fa268054cbde7. Apr 13 23:31:44.056186 containerd[1460]: time="2026-04-13T23:31:44.055987412Z" level=info msg="StartContainer for \"c59b44d9b5de80dfd1b0c4af2ab8dfae8c9e22e7e64e4c43023fa268054cbde7\" returns successfully" Apr 13 23:31:44.182445 kubelet[1759]: E0413 23:31:44.182385 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.336 [INFO][3825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.336 [INFO][3825] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" iface="eth0" netns="/var/run/netns/cni-2ec9f095-9092-f64f-6d13-e9b52dcd2187" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.337 [INFO][3825] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" iface="eth0" netns="/var/run/netns/cni-2ec9f095-9092-f64f-6d13-e9b52dcd2187" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.337 [INFO][3825] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" iface="eth0" netns="/var/run/netns/cni-2ec9f095-9092-f64f-6d13-e9b52dcd2187" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.337 [INFO][3825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.337 [INFO][3825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.361 [INFO][3943] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" HandleID="k8s-pod-network.744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:38.361 [INFO][3943] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:43.399 [INFO][3943] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:44.124 [WARNING][3943] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" HandleID="k8s-pod-network.744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:44.124 [INFO][3943] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" HandleID="k8s-pod-network.744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:44.614 [INFO][3943] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:44.617916 containerd[1460]: 2026-04-13 23:31:44.616 [INFO][3825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee" Apr 13 23:31:44.618323 containerd[1460]: time="2026-04-13T23:31:44.618207463Z" level=info msg="TearDown network for sandbox \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\" successfully" Apr 13 23:31:44.618323 containerd[1460]: time="2026-04-13T23:31:44.618312375Z" level=info msg="StopPodSandbox for \"744fda8a20f0f42d30edb3c3763a11f9a741001bc21842ba19591f6b17a201ee\" returns successfully" Apr 13 23:31:44.623461 kubelet[1759]: E0413 23:31:44.623400 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:44.623485 systemd[1]: run-netns-cni\x2d2ec9f095\x2d9092\x2df64f\x2d6d13\x2de9b52dcd2187.mount: Deactivated successfully. 
Apr 13 23:31:44.623991 containerd[1460]: time="2026-04-13T23:31:44.623949685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwj28,Uid:30fde92e-28ad-486a-acd9-ae88e3ee265a,Namespace:kube-system,Attempt:1,}" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:37.260 [INFO][3699] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0 calico-kube-controllers-776667d67- calico-system 9ea06102-a332-4253-8eb9-bb324b77c2d2 1410 0 2026-04-13 23:31:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:776667d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.139 calico-kube-controllers-776667d67-kwkmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3012d634e5a [] [] }} ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:37.260 [INFO][3699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:37.704 [INFO][3887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" HandleID="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" 
Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:37.905 [INFO][3887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" HandleID="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ecc0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"calico-kube-controllers-776667d67-kwkmw", "timestamp":"2026-04-13 23:31:37.704261911 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f0f20)} Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:37.905 [INFO][3887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:40.284 [INFO][3887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:40.285 [INFO][3887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:40.599 [INFO][3887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:41.377 [INFO][3887] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:41.857 [INFO][3887] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:42.335 [INFO][3887] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:42.485 [INFO][3887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:42.485 [INFO][3887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:42.682 [INFO][3887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:42.861 [INFO][3887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:43.398 [INFO][3887] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.4/26] block=192.168.100.0/26 
handle="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:43.398 [INFO][3887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.4/26] handle="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" host="10.0.0.139" Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:43.399 [INFO][3887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:44.871593 containerd[1460]: 2026-04-13 23:31:43.399 [INFO][3887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.4/26] IPv6=[] ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" HandleID="k8s-pod-network.16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 23:31:43.401 [INFO][3699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0", GenerateName:"calico-kube-controllers-776667d67-", Namespace:"calico-system", SelfLink:"", UID:"9ea06102-a332-4253-8eb9-bb324b77c2d2", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776667d67", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"calico-kube-controllers-776667d67-kwkmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3012d634e5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 23:31:43.401 [INFO][3699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.4/32] ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 23:31:43.401 [INFO][3699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3012d634e5a ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 23:31:43.405 [INFO][3699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 
23:31:43.408 [INFO][3699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0", GenerateName:"calico-kube-controllers-776667d67-", Namespace:"calico-system", SelfLink:"", UID:"9ea06102-a332-4253-8eb9-bb324b77c2d2", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776667d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f", Pod:"calico-kube-controllers-776667d67-kwkmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3012d634e5a", MAC:"72:55:40:bc:47:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:44.873905 containerd[1460]: 2026-04-13 
23:31:44.869 [INFO][3699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f" Namespace="calico-system" Pod="calico-kube-controllers-776667d67-kwkmw" WorkloadEndpoint="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:31:44.900634 containerd[1460]: time="2026-04-13T23:31:44.900295644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:44.900634 containerd[1460]: time="2026-04-13T23:31:44.900526845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:44.900634 containerd[1460]: time="2026-04-13T23:31:44.900547615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:44.901030 containerd[1460]: time="2026-04-13T23:31:44.900845510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:44.933411 systemd[1]: Started cri-containerd-16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f.scope - libcontainer container 16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f. 
Apr 13 23:31:44.946572 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:44.987669 containerd[1460]: time="2026-04-13T23:31:44.987426916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776667d67-kwkmw,Uid:9ea06102-a332-4253-8eb9-bb324b77c2d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f\"" Apr 13 23:31:44.989704 containerd[1460]: time="2026-04-13T23:31:44.989670772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 23:31:45.183720 kubelet[1759]: E0413 23:31:45.183404 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:45.282113 systemd-networkd[1386]: cali3012d634e5a: Gained IPv6LL Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" iface="eth0" netns="/var/run/netns/cni-1461ae76-2d6a-b093-a8ba-db97eb0d710c" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" iface="eth0" netns="/var/run/netns/cni-1461ae76-2d6a-b093-a8ba-db97eb0d710c" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" iface="eth0" netns="/var/run/netns/cni-1461ae76-2d6a-b093-a8ba-db97eb0d710c" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.365 [INFO][3814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.391 [INFO][3951] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" HandleID="k8s-pod-network.b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:38.392 [INFO][3951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:44.614 [INFO][3951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:45.563 [WARNING][3951] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" HandleID="k8s-pod-network.b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:45.563 [INFO][3951] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" HandleID="k8s-pod-network.b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:45.832 [INFO][3951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:45.835489 containerd[1460]: 2026-04-13 23:31:45.833 [INFO][3814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594" Apr 13 23:31:45.836247 containerd[1460]: time="2026-04-13T23:31:45.835777507Z" level=info msg="TearDown network for sandbox \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\" successfully" Apr 13 23:31:45.836247 containerd[1460]: time="2026-04-13T23:31:45.835840021Z" level=info msg="StopPodSandbox for \"b562b04fab62d415ad48b97dd24b4848a7492a19c6b3ac8058cd58d403657594\" returns successfully" Apr 13 23:31:45.838034 systemd[1]: run-netns-cni\x2d1461ae76\x2d2d6a\x2db093\x2da8ba\x2ddb97eb0d710c.mount: Deactivated successfully. 
Apr 13 23:31:45.840331 containerd[1460]: time="2026-04-13T23:31:45.840281904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-vfzv5,Uid:e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7,Namespace:calico-system,Attempt:1,}" Apr 13 23:31:46.184414 kubelet[1759]: E0413 23:31:46.184141 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:46.715761 kubelet[1759]: I0413 23:31:46.715708 1759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 23:31:47.184566 kubelet[1759]: E0413 23:31:47.184468 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:48.159147 kubelet[1759]: E0413 23:31:48.159076 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:48.185550 kubelet[1759]: E0413 23:31:48.185463 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:48.436535 containerd[1460]: time="2026-04-13T23:31:48.436335109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:48.438487 containerd[1460]: time="2026-04-13T23:31:48.438408450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 23:31:48.440469 containerd[1460]: time="2026-04-13T23:31:48.440411539Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:48.443386 containerd[1460]: time="2026-04-13T23:31:48.443271425Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:48.443941 containerd[1460]: time="2026-04-13T23:31:48.443914935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.454208429s" Apr 13 23:31:48.443941 containerd[1460]: time="2026-04-13T23:31:48.443940990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 23:31:48.456616 containerd[1460]: time="2026-04-13T23:31:48.456576098Z" level=info msg="CreateContainer within sandbox \"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 23:31:48.476850 containerd[1460]: time="2026-04-13T23:31:48.476751701Z" level=info msg="CreateContainer within sandbox \"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf\"" Apr 13 23:31:48.477556 containerd[1460]: time="2026-04-13T23:31:48.477526271Z" level=info msg="StartContainer for \"8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf\"" Apr 13 23:31:48.514224 systemd[1]: Started cri-containerd-8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf.scope - libcontainer container 8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf. 
Apr 13 23:31:48.562485 containerd[1460]: time="2026-04-13T23:31:48.562379393Z" level=info msg="StartContainer for \"8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf\" returns successfully" Apr 13 23:31:49.186523 kubelet[1759]: E0413 23:31:49.186411 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:49.369900 kubelet[1759]: I0413 23:31:49.369630 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-654bb5c945-wpvsk" podStartSLOduration=48.860689633 podStartE2EDuration="52.369596076s" podCreationTimestamp="2026-04-13 23:30:57 +0000 UTC" firstStartedPulling="2026-04-13 23:31:40.435700991 +0000 UTC m=+105.475972147" lastFinishedPulling="2026-04-13 23:31:43.944607432 +0000 UTC m=+108.984878590" observedRunningTime="2026-04-13 23:31:47.019293882 +0000 UTC m=+112.059565037" watchObservedRunningTime="2026-04-13 23:31:49.369596076 +0000 UTC m=+114.409867233" Apr 13 23:31:49.370316 kubelet[1759]: I0413 23:31:49.370260 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-776667d67-kwkmw" podStartSLOduration=44.914412101 podStartE2EDuration="48.370236454s" podCreationTimestamp="2026-04-13 23:31:01 +0000 UTC" firstStartedPulling="2026-04-13 23:31:44.989434785 +0000 UTC m=+110.029705929" lastFinishedPulling="2026-04-13 23:31:48.445259128 +0000 UTC m=+113.485530282" observedRunningTime="2026-04-13 23:31:49.340613895 +0000 UTC m=+114.380885042" watchObservedRunningTime="2026-04-13 23:31:49.370236454 +0000 UTC m=+114.410507619" Apr 13 23:31:50.186954 kubelet[1759]: E0413 23:31:50.186868 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:50.317490 systemd-networkd[1386]: calif175065fb4f: Link UP Apr 13 23:31:50.317739 systemd-networkd[1386]: calif175065fb4f: Gained carrier Apr 13 
23:31:51.020651 containerd[1460]: 2026-04-13 23:31:37.687 [INFO][3770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-csi--node--driver--r2cfg-eth0 csi-node-driver- calico-system db378709-9093-42e0-99e6-ba9beb70b60d 1415 0 2026-04-13 23:31:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.139 csi-node-driver-r2cfg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif175065fb4f [] [] }} ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:37.687 [INFO][3770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:38.299 [INFO][3935] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" HandleID="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:38.484 [INFO][3935] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" HandleID="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" 
Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000368f40), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"csi-node-driver-r2cfg", "timestamp":"2026-04-13 23:31:38.299732091 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004886e0)} Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:38.484 [INFO][3935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:45.832 [INFO][3935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:45.832 [INFO][3935] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:46.288 [INFO][3935] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:48.832 [INFO][3935] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:49.512 [INFO][3935] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:49.675 [INFO][3935] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:49.810 [INFO][3935] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:49.810 [INFO][3935] ipam/ipam.go 1245: Attempting to assign 
1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:49.947 [INFO][3935] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1 Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:50.115 [INFO][3935] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:50.312 [INFO][3935] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.5/26] block=192.168.100.0/26 handle="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:50.313 [INFO][3935] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.5/26] handle="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" host="10.0.0.139" Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:50.313 [INFO][3935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 23:31:51.020651 containerd[1460]: 2026-04-13 23:31:50.313 [INFO][3935] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.5/26] IPv6=[] ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" HandleID="k8s-pod-network.cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:50.315 [INFO][3770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-csi--node--driver--r2cfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db378709-9093-42e0-99e6-ba9beb70b60d", ResourceVersion:"1415", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"csi-node-driver-r2cfg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif175065fb4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:50.315 [INFO][3770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.5/32] ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:50.315 [INFO][3770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif175065fb4f ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:50.318 [INFO][3770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:50.318 [INFO][3770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-csi--node--driver--r2cfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db378709-9093-42e0-99e6-ba9beb70b60d", ResourceVersion:"1415", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1", Pod:"csi-node-driver-r2cfg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif175065fb4f", MAC:"12:57:25:4d:17:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:51.023354 containerd[1460]: 2026-04-13 23:31:51.017 [INFO][3770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1" Namespace="calico-system" Pod="csi-node-driver-r2cfg" WorkloadEndpoint="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:31:51.107742 containerd[1460]: time="2026-04-13T23:31:51.107553634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:51.107742 containerd[1460]: time="2026-04-13T23:31:51.107674827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:51.107742 containerd[1460]: time="2026-04-13T23:31:51.107719056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:51.108070 containerd[1460]: time="2026-04-13T23:31:51.107875566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:51.133290 systemd[1]: Started cri-containerd-cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1.scope - libcontainer container cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1. Apr 13 23:31:51.147718 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:51.161954 containerd[1460]: time="2026-04-13T23:31:51.161886819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2cfg,Uid:db378709-9093-42e0-99e6-ba9beb70b60d,Namespace:calico-system,Attempt:1,} returns sandbox id \"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1\"" Apr 13 23:31:51.164207 containerd[1460]: time="2026-04-13T23:31:51.163994009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 23:31:51.187501 kubelet[1759]: E0413 23:31:51.187424 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:51.809056 systemd-networkd[1386]: calif175065fb4f: Gained IPv6LL Apr 13 23:31:52.190053 kubelet[1759]: E0413 23:31:52.189399 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:52.473325 systemd-networkd[1386]: cali50a480e20a9: Link UP Apr 13 23:31:52.474184 systemd-networkd[1386]: cali50a480e20a9: Gained carrier Apr 13 23:31:53.100293 containerd[1460]: time="2026-04-13T23:31:53.100208526Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:53.101155 containerd[1460]: time="2026-04-13T23:31:53.101059734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 23:31:53.102930 containerd[1460]: time="2026-04-13T23:31:53.102858865Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:53.106124 containerd[1460]: time="2026-04-13T23:31:53.106037392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:53.106748 containerd[1460]: time="2026-04-13T23:31:53.106689719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.94266475s" Apr 13 23:31:53.106748 containerd[1460]: time="2026-04-13T23:31:53.106736737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 23:31:53.111787 containerd[1460]: time="2026-04-13T23:31:53.111720943Z" level=info msg="CreateContainer within sandbox \"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 23:31:53.130242 containerd[1460]: time="2026-04-13T23:31:53.130061999Z" level=info msg="CreateContainer within sandbox \"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ec293076904e16542d3c621786868e5fe4a8cfebbafda40b4d4c0d5c42b9f9d2\"" Apr 13 23:31:53.131316 containerd[1460]: time="2026-04-13T23:31:53.131250218Z" level=info msg="StartContainer for \"ec293076904e16542d3c621786868e5fe4a8cfebbafda40b4d4c0d5c42b9f9d2\"" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:41.130 [INFO][4015] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0 goldmane-cccfbd5cf- calico-system 6e08b664-aec1-4d1f-aaf5-d1070a1bd37d 1429 0 2026-04-13 23:30:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 10.0.0.139 goldmane-cccfbd5cf-6h5xs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali50a480e20a9 [] [] }} ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:41.130 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:41.871 [INFO][4049] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" HandleID="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:42.068 [INFO][4049] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" HandleID="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"goldmane-cccfbd5cf-6h5xs", "timestamp":"2026-04-13 23:31:41.871439771 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000312000)} Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:42.068 [INFO][4049] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:50.313 [INFO][4049] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:50.313 [INFO][4049] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:50.516 [INFO][4049] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.274 [INFO][4049] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.603 [INFO][4049] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.752 [INFO][4049] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.848 [INFO][4049] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.848 [INFO][4049] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:51.921 [INFO][4049] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0 Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:52.117 [INFO][4049] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:52.468 [INFO][4049] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.6/26] block=192.168.100.0/26 
handle="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:52.468 [INFO][4049] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.6/26] handle="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" host="10.0.0.139" Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:52.468 [INFO][4049] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:53.149322 containerd[1460]: 2026-04-13 23:31:52.468 [INFO][4049] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.6/26] IPv6=[] ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" HandleID="k8s-pod-network.b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:52.470 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d", ResourceVersion:"1429", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"goldmane-cccfbd5cf-6h5xs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50a480e20a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:52.471 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.6/32] ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:52.471 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50a480e20a9 ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:52.474 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:52.475 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" 
WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d", ResourceVersion:"1429", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0", Pod:"goldmane-cccfbd5cf-6h5xs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50a480e20a9", MAC:"b6:98:55:46:8c:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:53.150774 containerd[1460]: 2026-04-13 23:31:53.147 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6h5xs" WorkloadEndpoint="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:31:53.172151 systemd[1]: Started 
cri-containerd-ec293076904e16542d3c621786868e5fe4a8cfebbafda40b4d4c0d5c42b9f9d2.scope - libcontainer container ec293076904e16542d3c621786868e5fe4a8cfebbafda40b4d4c0d5c42b9f9d2. Apr 13 23:31:53.177871 containerd[1460]: time="2026-04-13T23:31:53.177635215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:53.177871 containerd[1460]: time="2026-04-13T23:31:53.177758322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:53.177871 containerd[1460]: time="2026-04-13T23:31:53.177777287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:53.178970 containerd[1460]: time="2026-04-13T23:31:53.177966668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:53.190665 kubelet[1759]: E0413 23:31:53.190609 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:53.207677 systemd[1]: Started cri-containerd-b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0.scope - libcontainer container b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0. 
Apr 13 23:31:53.212484 containerd[1460]: time="2026-04-13T23:31:53.212444276Z" level=info msg="StartContainer for \"ec293076904e16542d3c621786868e5fe4a8cfebbafda40b4d4c0d5c42b9f9d2\" returns successfully" Apr 13 23:31:53.217273 containerd[1460]: time="2026-04-13T23:31:53.217054437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 23:31:53.225048 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:53.257348 containerd[1460]: time="2026-04-13T23:31:53.257275651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6h5xs,Uid:6e08b664-aec1-4d1f-aaf5-d1070a1bd37d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0\"" Apr 13 23:31:53.920143 systemd-networkd[1386]: cali50a480e20a9: Gained IPv6LL Apr 13 23:31:54.193043 kubelet[1759]: E0413 23:31:54.192152 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:55.192955 kubelet[1759]: E0413 23:31:55.192884 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:55.239705 containerd[1460]: time="2026-04-13T23:31:55.239618029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:55.241314 containerd[1460]: time="2026-04-13T23:31:55.241135182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 23:31:55.243301 containerd[1460]: time="2026-04-13T23:31:55.243092134Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 
23:31:55.246477 containerd[1460]: time="2026-04-13T23:31:55.246410630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:55.247244 containerd[1460]: time="2026-04-13T23:31:55.247195846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.030090244s" Apr 13 23:31:55.247244 containerd[1460]: time="2026-04-13T23:31:55.247233327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 23:31:55.249361 containerd[1460]: time="2026-04-13T23:31:55.249044118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 23:31:55.252925 containerd[1460]: time="2026-04-13T23:31:55.252891623Z" level=info msg="CreateContainer within sandbox \"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 23:31:55.266532 containerd[1460]: time="2026-04-13T23:31:55.266456964Z" level=info msg="CreateContainer within sandbox \"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"158466a43ff57040dedfb1b3c0b6c02c3633e365082946c3ee5655796fed7798\"" Apr 13 23:31:55.267241 containerd[1460]: time="2026-04-13T23:31:55.267213509Z" level=info msg="StartContainer for 
\"158466a43ff57040dedfb1b3c0b6c02c3633e365082946c3ee5655796fed7798\"" Apr 13 23:31:55.304037 systemd[1]: Started cri-containerd-158466a43ff57040dedfb1b3c0b6c02c3633e365082946c3ee5655796fed7798.scope - libcontainer container 158466a43ff57040dedfb1b3c0b6c02c3633e365082946c3ee5655796fed7798. Apr 13 23:31:55.334442 containerd[1460]: time="2026-04-13T23:31:55.334362654Z" level=info msg="StartContainer for \"158466a43ff57040dedfb1b3c0b6c02c3633e365082946c3ee5655796fed7798\" returns successfully" Apr 13 23:31:55.979931 kubelet[1759]: E0413 23:31:55.979871 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:56.006556 containerd[1460]: time="2026-04-13T23:31:56.006496662Z" level=info msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" Apr 13 23:31:56.193304 kubelet[1759]: E0413 23:31:56.193211 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:56.320099 kubelet[1759]: I0413 23:31:56.319637 1759 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 23:31:56.321255 kubelet[1759]: I0413 23:31:56.321230 1759 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 23:31:57.194390 kubelet[1759]: E0413 23:31:57.194273 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:57.691131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453539964.mount: Deactivated successfully. 
Apr 13 23:31:58.005172 containerd[1460]: time="2026-04-13T23:31:58.005008339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:58.006055 containerd[1460]: time="2026-04-13T23:31:58.005985145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 23:31:58.007406 containerd[1460]: time="2026-04-13T23:31:58.007372642Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:58.009484 containerd[1460]: time="2026-04-13T23:31:58.009415277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:31:58.010443 containerd[1460]: time="2026-04-13T23:31:58.010384549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.76131081s" Apr 13 23:31:58.010443 containerd[1460]: time="2026-04-13T23:31:58.010426075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 23:31:58.014588 containerd[1460]: time="2026-04-13T23:31:58.014547064Z" level=info msg="CreateContainer within sandbox \"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 23:31:58.033178 containerd[1460]: time="2026-04-13T23:31:58.033101363Z" 
level=info msg="CreateContainer within sandbox \"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118\"" Apr 13 23:31:58.033759 containerd[1460]: time="2026-04-13T23:31:58.033731280Z" level=info msg="StartContainer for \"9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118\"" Apr 13 23:31:58.065060 systemd[1]: Started cri-containerd-9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118.scope - libcontainer container 9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118. Apr 13 23:31:58.109953 containerd[1460]: time="2026-04-13T23:31:58.109881683Z" level=info msg="StartContainer for \"9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118\" returns successfully" Apr 13 23:31:58.194981 kubelet[1759]: E0413 23:31:58.194891 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:58.656453 systemd-networkd[1386]: calie1bfdf8326a: Link UP Apr 13 23:31:58.668323 systemd-networkd[1386]: calie1bfdf8326a: Gained carrier Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:46.048 [INFO][4124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0 coredns-66bc5c9577- kube-system 30fde92e-28ad-486a-acd9-ae88e3ee265a 1434 0 2026-04-13 23:28:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.139 coredns-66bc5c9577-bwj28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1bfdf8326a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:46.048 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:47.786 [INFO][4211] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" HandleID="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:49.167 [INFO][4211] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" HandleID="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb50), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.139", "pod":"coredns-66bc5c9577-bwj28", "timestamp":"2026-04-13 23:31:47.786150219 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003591e0)} Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:49.168 [INFO][4211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:52.468 [INFO][4211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:52.469 [INFO][4211] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:52.736 [INFO][4211] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:53.218 [INFO][4211] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:53.607 [INFO][4211] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:53.803 [INFO][4211] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:53.982 [INFO][4211] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:53.983 [INFO][4211] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:54.065 [INFO][4211] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894 Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:54.909 [INFO][4211] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:58.640 [INFO][4211] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.7/26] block=192.168.100.0/26 handle="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:58.640 [INFO][4211] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.7/26] handle="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" host="10.0.0.139" Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:58.641 [INFO][4211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:31:58.743230 containerd[1460]: 2026-04-13 23:31:58.641 [INFO][4211] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.7/26] IPv6=[] ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" HandleID="k8s-pod-network.d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Workload="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.649 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"30fde92e-28ad-486a-acd9-ae88e3ee265a", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"coredns-66bc5c9577-bwj28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1bfdf8326a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.649 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.7/32] ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.649 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1bfdf8326a ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" 
Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.663 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.663 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"30fde92e-28ad-486a-acd9-ae88e3ee265a", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894", Pod:"coredns-66bc5c9577-bwj28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1bfdf8326a", MAC:"66:29:fd:75:51:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:31:58.744061 containerd[1460]: 2026-04-13 23:31:58.731 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894" Namespace="kube-system" Pod="coredns-66bc5c9577-bwj28" WorkloadEndpoint="10.0.0.139-k8s-coredns--66bc5c9577--bwj28-eth0" Apr 13 23:31:58.750984 kubelet[1759]: I0413 23:31:58.748993 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r2cfg" podStartSLOduration=53.66385945 podStartE2EDuration="57.748972476s" podCreationTimestamp="2026-04-13 23:31:01 +0000 UTC" firstStartedPulling="2026-04-13 23:31:51.163526913 +0000 UTC m=+116.203798068" lastFinishedPulling="2026-04-13 23:31:55.248639935 +0000 UTC m=+120.288911094" observedRunningTime="2026-04-13 23:31:58.738970829 +0000 UTC m=+123.779241999" watchObservedRunningTime="2026-04-13 23:31:58.748972476 +0000 UTC m=+123.789243669" Apr 13 23:31:58.878868 containerd[1460]: 
time="2026-04-13T23:31:58.878657102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:31:58.884546 containerd[1460]: time="2026-04-13T23:31:58.884200683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:31:58.884546 containerd[1460]: time="2026-04-13T23:31:58.884315800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:58.884849 containerd[1460]: time="2026-04-13T23:31:58.884508138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:31:58.967947 systemd[1]: run-containerd-runc-k8s.io-9bb3b66ec471b1836e0d976201236e8e3b6bd9da521544bf1cccb2cf6e40e118-runc.D9g3B8.mount: Deactivated successfully. Apr 13 23:31:58.987714 systemd[1]: Started cri-containerd-d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894.scope - libcontainer container d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894. 
Apr 13 23:31:59.031461 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:31:59.134877 kubelet[1759]: I0413 23:31:59.133922 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-6h5xs" podStartSLOduration=56.381394841 podStartE2EDuration="1m1.133884238s" podCreationTimestamp="2026-04-13 23:30:58 +0000 UTC" firstStartedPulling="2026-04-13 23:31:53.258859621 +0000 UTC m=+118.299130765" lastFinishedPulling="2026-04-13 23:31:58.011349006 +0000 UTC m=+123.051620162" observedRunningTime="2026-04-13 23:31:59.13267671 +0000 UTC m=+124.172947892" watchObservedRunningTime="2026-04-13 23:31:59.133884238 +0000 UTC m=+124.174155404" Apr 13 23:31:59.145403 containerd[1460]: time="2026-04-13T23:31:59.145156246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwj28,Uid:30fde92e-28ad-486a-acd9-ae88e3ee265a,Namespace:kube-system,Attempt:1,} returns sandbox id \"d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894\"" Apr 13 23:31:59.147535 kubelet[1759]: E0413 23:31:59.146148 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:31:59.162456 containerd[1460]: time="2026-04-13T23:31:59.162297012Z" level=info msg="CreateContainer within sandbox \"d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 23:31:59.197353 kubelet[1759]: E0413 23:31:59.197241 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:31:59.211592 containerd[1460]: time="2026-04-13T23:31:59.211504500Z" level=info msg="CreateContainer within sandbox \"d3c37b7014c8e35f4315cceb118f23f2b2174bcbe1c46186f0d134579a6c7894\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"428203d2bdfec3660814fbd4142bbe06289ccaad43fad5361a51f88e2e0a4c1d\"" Apr 13 23:31:59.212495 containerd[1460]: time="2026-04-13T23:31:59.212456957Z" level=info msg="StartContainer for \"428203d2bdfec3660814fbd4142bbe06289ccaad43fad5361a51f88e2e0a4c1d\"" Apr 13 23:31:59.337336 systemd[1]: Started cri-containerd-428203d2bdfec3660814fbd4142bbe06289ccaad43fad5361a51f88e2e0a4c1d.scope - libcontainer container 428203d2bdfec3660814fbd4142bbe06289ccaad43fad5361a51f88e2e0a4c1d. Apr 13 23:31:59.430549 containerd[1460]: time="2026-04-13T23:31:59.430498526Z" level=info msg="StartContainer for \"428203d2bdfec3660814fbd4142bbe06289ccaad43fad5361a51f88e2e0a4c1d\" returns successfully" Apr 13 23:31:59.696006 systemd-networkd[1386]: calia8c1ec24024: Link UP Apr 13 23:31:59.696685 systemd-networkd[1386]: calia8c1ec24024: Gained carrier Apr 13 23:31:59.834579 kubelet[1759]: E0413 23:31:59.834482 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:48.299 [INFO][4196] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0 calico-apiserver-654bb5c945- calico-system e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7 1435 0 2026-04-13 23:30:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654bb5c945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.139 calico-apiserver-654bb5c945-vfzv5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia8c1ec24024 [] [] }} ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" 
Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:48.299 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:49.230 [INFO][4294] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" HandleID="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:49.271 [INFO][4294] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" HandleID="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004018c0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.139", "pod":"calico-apiserver-654bb5c945-vfzv5", "timestamp":"2026-04-13 23:31:49.230741213 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000300dc0)} Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:49.271 [INFO][4294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:58.641 [INFO][4294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:58.641 [INFO][4294] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:58.710 [INFO][4294] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:58.999 [INFO][4294] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.124 [INFO][4294] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.178 [INFO][4294] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.211 [INFO][4294] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.211 [INFO][4294] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.314 [INFO][4294] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6 Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.358 [INFO][4294] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.664 [INFO][4294] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.8/26] block=192.168.100.0/26 handle="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.665 [INFO][4294] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.8/26] handle="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" host="10.0.0.139" Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.665 [INFO][4294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:00.035269 containerd[1460]: 2026-04-13 23:31:59.665 [INFO][4294] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.8/26] IPv6=[] ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" HandleID="k8s-pod-network.15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:31:59.679 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7", ResourceVersion:"1435", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"calico-apiserver-654bb5c945-vfzv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia8c1ec24024", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:31:59.686 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.8/32] ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:31:59.687 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8c1ec24024 ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:31:59.699 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:31:59.700 
[INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7", ResourceVersion:"1435", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6", Pod:"calico-apiserver-654bb5c945-vfzv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia8c1ec24024", MAC:"d2:41:bd:1a:bf:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:00.035939 containerd[1460]: 2026-04-13 23:32:00.026 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6" Namespace="calico-system" Pod="calico-apiserver-654bb5c945-vfzv5" WorkloadEndpoint="10.0.0.139-k8s-calico--apiserver--654bb5c945--vfzv5-eth0" Apr 13 23:32:00.145309 containerd[1460]: time="2026-04-13T23:32:00.143548493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:32:00.145309 containerd[1460]: time="2026-04-13T23:32:00.143691435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:32:00.145309 containerd[1460]: time="2026-04-13T23:32:00.143708558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:32:00.145309 containerd[1460]: time="2026-04-13T23:32:00.144005934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:32:00.200427 kubelet[1759]: E0413 23:32:00.200357 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:00.250903 systemd[1]: Started cri-containerd-15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6.scope - libcontainer container 15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6. 
Apr 13 23:32:00.280398 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:32:00.353379 containerd[1460]: time="2026-04-13T23:32:00.348181879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654bb5c945-vfzv5,Uid:e599cfc7-1fc3-4ebb-92b5-f669e8a23ba7,Namespace:calico-system,Attempt:1,} returns sandbox id \"15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6\"" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.775 [WARNING][4578] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"9d2e4322-5d1c-4d78-b98d-4acc3604c168", ResourceVersion:"1474", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77", Pod:"calico-apiserver-654bb5c945-wpvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1aa9eafa915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.776 [INFO][4578] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.776 [INFO][4578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" iface="eth0" netns="" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.776 [INFO][4578] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.776 [INFO][4578] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.891 [INFO][4660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:58.891 [INFO][4660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:59.665 [INFO][4660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:59.861 [WARNING][4660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:31:59.861 [INFO][4660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:32:00.369 [INFO][4660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:00.387218 containerd[1460]: 2026-04-13 23:32:00.379 [INFO][4578] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:00.387218 containerd[1460]: time="2026-04-13T23:32:00.384196271Z" level=info msg="TearDown network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" successfully" Apr 13 23:32:00.387218 containerd[1460]: time="2026-04-13T23:32:00.384235853Z" level=info msg="StopPodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" returns successfully" Apr 13 23:32:00.387218 containerd[1460]: time="2026-04-13T23:32:00.385499599Z" level=info msg="RemovePodSandbox for \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" Apr 13 23:32:00.387218 containerd[1460]: time="2026-04-13T23:32:00.385537051Z" level=info msg="Forcibly stopping sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\"" Apr 13 23:32:00.386452 systemd-networkd[1386]: calie1bfdf8326a: Gained IPv6LL Apr 13 23:32:00.398342 containerd[1460]: time="2026-04-13T23:32:00.398276455Z" level=info msg="CreateContainer within sandbox \"15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 23:32:00.457037 containerd[1460]: time="2026-04-13T23:32:00.456981509Z" level=info msg="CreateContainer within sandbox \"15fdc75d5947a2720ffcfc650d99b9a3f778dbf23d2d7c7e1341a6545950a3f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"07a456ffb528c478c5986949c6197d0d8797b53a2119f524eedfd36c5a4fa00f\"" Apr 13 23:32:00.459312 containerd[1460]: time="2026-04-13T23:32:00.458070445Z" level=info msg="StartContainer for \"07a456ffb528c478c5986949c6197d0d8797b53a2119f524eedfd36c5a4fa00f\"" Apr 13 23:32:00.532242 systemd[1]: Started cri-containerd-07a456ffb528c478c5986949c6197d0d8797b53a2119f524eedfd36c5a4fa00f.scope - libcontainer container 07a456ffb528c478c5986949c6197d0d8797b53a2119f524eedfd36c5a4fa00f. 
Apr 13 23:32:00.632593 containerd[1460]: time="2026-04-13T23:32:00.631570583Z" level=info msg="StartContainer for \"07a456ffb528c478c5986949c6197d0d8797b53a2119f524eedfd36c5a4fa00f\" returns successfully" Apr 13 23:32:00.853021 kubelet[1759]: E0413 23:32:00.852340 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:32:01.094879 kubelet[1759]: I0413 23:32:01.093523 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bwj28" podStartSLOduration=225.09349452 podStartE2EDuration="3m45.09349452s" podCreationTimestamp="2026-04-13 23:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:32:00.7353473 +0000 UTC m=+125.775618458" watchObservedRunningTime="2026-04-13 23:32:01.09349452 +0000 UTC m=+126.133765684" Apr 13 23:32:01.205980 kubelet[1759]: E0413 23:32:01.205895 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:01.218897 systemd-networkd[1386]: calia8c1ec24024: Gained IPv6LL Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:00.974 [WARNING][4867] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0", GenerateName:"calico-apiserver-654bb5c945-", Namespace:"calico-system", SelfLink:"", UID:"9d2e4322-5d1c-4d78-b98d-4acc3604c168", ResourceVersion:"1474", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654bb5c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"1d25b235ecc8c69e454cd10e5a29d18583b6821027413f2409d93221d5aadb77", Pod:"calico-apiserver-654bb5c945-wpvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1aa9eafa915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:00.978 [INFO][4867] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:00.978 [INFO][4867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" iface="eth0" netns="" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:00.978 [INFO][4867] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:00.978 [INFO][4867] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.079 [INFO][4931] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.080 [INFO][4931] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.080 [INFO][4931] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.230 [WARNING][4931] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.230 [INFO][4931] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" HandleID="k8s-pod-network.3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Workload="10.0.0.139-k8s-calico--apiserver--654bb5c945--wpvsk-eth0" Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.319 [INFO][4931] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:01.325948 containerd[1460]: 2026-04-13 23:32:01.322 [INFO][4867] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62" Apr 13 23:32:01.325948 containerd[1460]: time="2026-04-13T23:32:01.325202267Z" level=info msg="TearDown network for sandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" successfully" Apr 13 23:32:01.347834 containerd[1460]: time="2026-04-13T23:32:01.347520779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:01.347834 containerd[1460]: time="2026-04-13T23:32:01.347638920Z" level=info msg="RemovePodSandbox \"3d7111feb919ab4d4213d8afd7ee0b044f019fbf19eb1bf5ce7c53ec5a273c62\" returns successfully" Apr 13 23:32:01.353483 containerd[1460]: time="2026-04-13T23:32:01.353406105Z" level=info msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" Apr 13 23:32:01.878849 kubelet[1759]: E0413 23:32:01.877357 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:32:01.878849 kubelet[1759]: I0413 23:32:01.877649 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-654bb5c945-vfzv5" podStartSLOduration=64.877633517 podStartE2EDuration="1m4.877633517s" podCreationTimestamp="2026-04-13 23:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:32:01.571626251 +0000 UTC m=+126.611897414" watchObservedRunningTime="2026-04-13 23:32:01.877633517 +0000 UTC m=+126.917904682" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.692 [WARNING][4960] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"16b5b403-644e-4ee0-ad2f-a49ad057b5c5", ResourceVersion:"1444", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712", Pod:"coredns-66bc5c9577-4fkd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6ec363e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.692 [INFO][4960] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.692 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" iface="eth0" netns="" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.692 [INFO][4960] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.692 [INFO][4960] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.762 [INFO][4969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.763 [INFO][4969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.763 [INFO][4969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.866 [WARNING][4969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.866 [INFO][4969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.905 [INFO][4969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:01.918493 containerd[1460]: 2026-04-13 23:32:01.911 [INFO][4960] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:01.920245 containerd[1460]: time="2026-04-13T23:32:01.919444451Z" level=info msg="TearDown network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" successfully" Apr 13 23:32:01.920245 containerd[1460]: time="2026-04-13T23:32:01.919493970Z" level=info msg="StopPodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" returns successfully" Apr 13 23:32:01.921065 containerd[1460]: time="2026-04-13T23:32:01.920436614Z" level=info msg="RemovePodSandbox for \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" Apr 13 23:32:01.922025 containerd[1460]: time="2026-04-13T23:32:01.921374052Z" level=info msg="Forcibly stopping sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\"" Apr 13 23:32:02.206310 kubelet[1759]: E0413 23:32:02.206075 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.530 [WARNING][4987] 
cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"16b5b403-644e-4ee0-ad2f-a49ad057b5c5", ResourceVersion:"1444", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 28, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"6352321dd11e09cc5bbbd91001b760f0cd254743587b60171ee0c890d8ae3712", Pod:"coredns-66bc5c9577-4fkd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6ec363e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.530 [INFO][4987] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.530 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" iface="eth0" netns="" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.530 [INFO][4987] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.530 [INFO][4987] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.585 [INFO][4998] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.585 [INFO][4998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.585 [INFO][4998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.755 [WARNING][4998] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.755 [INFO][4998] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" HandleID="k8s-pod-network.46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Workload="10.0.0.139-k8s-coredns--66bc5c9577--4fkd7-eth0" Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.823 [INFO][4998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:02.857691 containerd[1460]: 2026-04-13 23:32:02.830 [INFO][4987] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58" Apr 13 23:32:02.857691 containerd[1460]: time="2026-04-13T23:32:02.855726847Z" level=info msg="TearDown network for sandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" successfully" Apr 13 23:32:02.863402 containerd[1460]: time="2026-04-13T23:32:02.863156842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:02.863402 containerd[1460]: time="2026-04-13T23:32:02.863270207Z" level=info msg="RemovePodSandbox \"46edbdfd0413c9ada98cb42ab4127e3c74aff1738e0c45d4c688689503439c58\" returns successfully" Apr 13 23:32:02.876230 containerd[1460]: time="2026-04-13T23:32:02.873709015Z" level=info msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" Apr 13 23:32:03.210992 kubelet[1759]: E0413 23:32:03.210617 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.188 [WARNING][5016] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0", GenerateName:"calico-kube-controllers-776667d67-", Namespace:"calico-system", SelfLink:"", UID:"9ea06102-a332-4253-8eb9-bb324b77c2d2", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776667d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f", 
Pod:"calico-kube-controllers-776667d67-kwkmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3012d634e5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.188 [INFO][5016] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.188 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" iface="eth0" netns="" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.188 [INFO][5016] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.188 [INFO][5016] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.298 [INFO][5024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.299 [INFO][5024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.299 [INFO][5024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.504 [WARNING][5024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.504 [INFO][5024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.556 [INFO][5024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:03.561747 containerd[1460]: 2026-04-13 23:32:03.560 [INFO][5016] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:03.562440 containerd[1460]: time="2026-04-13T23:32:03.562036980Z" level=info msg="TearDown network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" successfully" Apr 13 23:32:03.562440 containerd[1460]: time="2026-04-13T23:32:03.562080757Z" level=info msg="StopPodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" returns successfully" Apr 13 23:32:03.563290 containerd[1460]: time="2026-04-13T23:32:03.563234149Z" level=info msg="RemovePodSandbox for \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" Apr 13 23:32:03.563373 containerd[1460]: time="2026-04-13T23:32:03.563296543Z" level=info msg="Forcibly stopping sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\"" Apr 13 23:32:04.211700 kubelet[1759]: E0413 23:32:04.211603 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.879 [WARNING][5042] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0", GenerateName:"calico-kube-controllers-776667d67-", Namespace:"calico-system", SelfLink:"", UID:"9ea06102-a332-4253-8eb9-bb324b77c2d2", ResourceVersion:"1495", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776667d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"16e9de07e058a3d848e8bfc15f6c961e97eefd4802cee5a648422a00b5ee4f1f", Pod:"calico-kube-controllers-776667d67-kwkmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3012d634e5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.879 [INFO][5042] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.879 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" iface="eth0" netns="" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.879 [INFO][5042] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.879 [INFO][5042] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.997 [INFO][5050] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:03.999 [INFO][5050] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:04.000 [INFO][5050] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:04.596 [WARNING][5050] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:04.598 [INFO][5050] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" HandleID="k8s-pod-network.84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Workload="10.0.0.139-k8s-calico--kube--controllers--776667d67--kwkmw-eth0" Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:04.688 [INFO][5050] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:04.739517 containerd[1460]: 2026-04-13 23:32:04.716 [INFO][5042] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3" Apr 13 23:32:04.743230 containerd[1460]: time="2026-04-13T23:32:04.740903735Z" level=info msg="TearDown network for sandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" successfully" Apr 13 23:32:04.750126 containerd[1460]: time="2026-04-13T23:32:04.750021763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:04.750126 containerd[1460]: time="2026-04-13T23:32:04.750151642Z" level=info msg="RemovePodSandbox \"84fbc200218f9266f64e34e744c0b040ec537cef82f14060645fe5bf12fdb3e3\" returns successfully" Apr 13 23:32:04.752359 containerd[1460]: time="2026-04-13T23:32:04.750748900Z" level=info msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" Apr 13 23:32:05.215264 kubelet[1759]: E0413 23:32:05.212961 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.072 [WARNING][5067] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" WorkloadEndpoint="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.073 [INFO][5067] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.073 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" iface="eth0" netns="" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.073 [INFO][5067] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.073 [INFO][5067] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.267 [INFO][5075] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.267 [INFO][5075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.267 [INFO][5075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.344 [WARNING][5075] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.344 [INFO][5075] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.389 [INFO][5075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:05.403351 containerd[1460]: 2026-04-13 23:32:05.394 [INFO][5067] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:05.403908 containerd[1460]: time="2026-04-13T23:32:05.403496286Z" level=info msg="TearDown network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" successfully" Apr 13 23:32:05.403908 containerd[1460]: time="2026-04-13T23:32:05.403595588Z" level=info msg="StopPodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" returns successfully" Apr 13 23:32:05.407577 containerd[1460]: time="2026-04-13T23:32:05.407516109Z" level=info msg="RemovePodSandbox for \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" Apr 13 23:32:05.407577 containerd[1460]: time="2026-04-13T23:32:05.407576367Z" level=info msg="Forcibly stopping sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\"" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.759 [WARNING][5099] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" WorkloadEndpoint="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.759 [INFO][5099] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.759 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" iface="eth0" netns="" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.759 [INFO][5099] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.760 [INFO][5099] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.834 [INFO][5117] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.834 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.834 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.936 [WARNING][5117] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:05.942 [INFO][5117] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" HandleID="k8s-pod-network.4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Workload="10.0.0.139-k8s-whisker--775c5f9b46--w497s-eth0" Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:06.012 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:06.023268 containerd[1460]: 2026-04-13 23:32:06.021 [INFO][5099] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4" Apr 13 23:32:06.024109 containerd[1460]: time="2026-04-13T23:32:06.023313044Z" level=info msg="TearDown network for sandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" successfully" Apr 13 23:32:06.027739 containerd[1460]: time="2026-04-13T23:32:06.027676838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:06.029826 containerd[1460]: time="2026-04-13T23:32:06.027765912Z" level=info msg="RemovePodSandbox \"4ea0757ed1eb351f83ab325bbc604c63a3307769691e69f6f9ae73a05f8d9ed4\" returns successfully" Apr 13 23:32:06.029826 containerd[1460]: time="2026-04-13T23:32:06.028626382Z" level=info msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" Apr 13 23:32:06.216201 kubelet[1759]: E0413 23:32:06.215352 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.494 [WARNING][5135] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-csi--node--driver--r2cfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db378709-9093-42e0-99e6-ba9beb70b60d", ResourceVersion:"1530", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1", Pod:"csi-node-driver-r2cfg", 
Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif175065fb4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.495 [INFO][5135] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.495 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" iface="eth0" netns="" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.495 [INFO][5135] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.495 [INFO][5135] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.581 [INFO][5143] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.581 [INFO][5143] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.582 [INFO][5143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.848 [WARNING][5143] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.848 [INFO][5143] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:06.997 [INFO][5143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:07.010323 containerd[1460]: 2026-04-13 23:32:07.004 [INFO][5135] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.015284 containerd[1460]: time="2026-04-13T23:32:07.010365372Z" level=info msg="TearDown network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" successfully" Apr 13 23:32:07.015284 containerd[1460]: time="2026-04-13T23:32:07.010399369Z" level=info msg="StopPodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" returns successfully" Apr 13 23:32:07.015284 containerd[1460]: time="2026-04-13T23:32:07.014019738Z" level=info msg="RemovePodSandbox for \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" Apr 13 23:32:07.015284 containerd[1460]: time="2026-04-13T23:32:07.014069995Z" level=info msg="Forcibly stopping sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\"" Apr 13 23:32:07.216266 kubelet[1759]: E0413 23:32:07.216113 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.456 [WARNING][5161] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-csi--node--driver--r2cfg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db378709-9093-42e0-99e6-ba9beb70b60d", ResourceVersion:"1530", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"cff5b5b68ecf5f7f6a6a897742a6b1774250c6dc75d26fd4ba473c85625738e1", Pod:"csi-node-driver-r2cfg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif175065fb4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.456 [INFO][5161] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.456 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" iface="eth0" netns="" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.456 [INFO][5161] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.456 [INFO][5161] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.551 [INFO][5169] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.552 [INFO][5169] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.553 [INFO][5169] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.637 [WARNING][5169] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.642 [INFO][5169] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" HandleID="k8s-pod-network.85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Workload="10.0.0.139-k8s-csi--node--driver--r2cfg-eth0" Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.704 [INFO][5169] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:07.727245 containerd[1460]: 2026-04-13 23:32:07.715 [INFO][5161] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3" Apr 13 23:32:07.727245 containerd[1460]: time="2026-04-13T23:32:07.724222879Z" level=info msg="TearDown network for sandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" successfully" Apr 13 23:32:07.731682 containerd[1460]: time="2026-04-13T23:32:07.731524719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:07.731682 containerd[1460]: time="2026-04-13T23:32:07.731617485Z" level=info msg="RemovePodSandbox \"85433dd274c9b8979fbc01f89358003fd1f330040c5457c5aa83cb6f050ffea3\" returns successfully" Apr 13 23:32:07.734258 containerd[1460]: time="2026-04-13T23:32:07.732383941Z" level=info msg="StopPodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" Apr 13 23:32:08.216962 kubelet[1759]: E0413 23:32:08.216761 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.006 [WARNING][5188] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d", ResourceVersion:"1574", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0", Pod:"goldmane-cccfbd5cf-6h5xs", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50a480e20a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.009 [INFO][5188] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.009 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" iface="eth0" netns="" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.009 [INFO][5188] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.009 [INFO][5188] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.065 [INFO][5196] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.067 [INFO][5196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.067 [INFO][5196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.184 [WARNING][5196] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.185 [INFO][5196] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.247 [INFO][5196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:08.299784 containerd[1460]: 2026-04-13 23:32:08.298 [INFO][5188] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.300781 containerd[1460]: time="2026-04-13T23:32:08.299879871Z" level=info msg="TearDown network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" successfully" Apr 13 23:32:08.300781 containerd[1460]: time="2026-04-13T23:32:08.299915928Z" level=info msg="StopPodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" returns successfully" Apr 13 23:32:08.300781 containerd[1460]: time="2026-04-13T23:32:08.300752681Z" level=info msg="RemovePodSandbox for \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" Apr 13 23:32:08.302467 containerd[1460]: time="2026-04-13T23:32:08.300787168Z" level=info msg="Forcibly stopping sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\"" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.621 [WARNING][5212] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"6e08b664-aec1-4d1f-aaf5-d1070a1bd37d", ResourceVersion:"1574", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 30, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"b01d76d6cafe4775b4797136e5e1f2666519caad88eef4a25e1207a379513ca0", Pod:"goldmane-cccfbd5cf-6h5xs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50a480e20a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.621 [INFO][5212] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.621 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" iface="eth0" netns="" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.621 [INFO][5212] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.621 [INFO][5212] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.690 [INFO][5221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.691 [INFO][5221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.691 [INFO][5221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.851 [WARNING][5221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.851 [INFO][5221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" HandleID="k8s-pod-network.013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Workload="10.0.0.139-k8s-goldmane--cccfbd5cf--6h5xs-eth0" Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.876 [INFO][5221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:08.889273 containerd[1460]: 2026-04-13 23:32:08.880 [INFO][5212] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf" Apr 13 23:32:08.889273 containerd[1460]: time="2026-04-13T23:32:08.886912985Z" level=info msg="TearDown network for sandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" successfully" Apr 13 23:32:08.910554 containerd[1460]: time="2026-04-13T23:32:08.910475373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 23:32:08.913217 containerd[1460]: time="2026-04-13T23:32:08.910896394Z" level=info msg="RemovePodSandbox \"013acc9c1696c4ebea5ba2b0ae5d3b80fef2a92f86fde53a123fb5fe27a1ebcf\" returns successfully" Apr 13 23:32:09.217745 kubelet[1759]: E0413 23:32:09.217430 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:10.218964 kubelet[1759]: E0413 23:32:10.218708 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:11.219497 kubelet[1759]: E0413 23:32:11.219407 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:12.221331 kubelet[1759]: E0413 23:32:12.221243 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:13.222280 kubelet[1759]: E0413 23:32:13.222036 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:14.223359 kubelet[1759]: E0413 23:32:14.223274 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:15.224642 kubelet[1759]: E0413 23:32:15.224509 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:15.978275 kubelet[1759]: E0413 23:32:15.978174 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:16.225775 kubelet[1759]: E0413 23:32:16.225434 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:17.228121 kubelet[1759]: E0413 23:32:17.227678 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Apr 13 23:32:18.228730 kubelet[1759]: E0413 23:32:18.228584 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:19.232317 kubelet[1759]: E0413 23:32:19.232077 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:20.238079 kubelet[1759]: E0413 23:32:20.237760 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:21.238961 kubelet[1759]: E0413 23:32:21.238705 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:22.241393 kubelet[1759]: E0413 23:32:22.241119 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:23.242207 kubelet[1759]: E0413 23:32:23.242128 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:24.242537 kubelet[1759]: E0413 23:32:24.242447 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:25.243546 kubelet[1759]: E0413 23:32:25.243433 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:26.245513 kubelet[1759]: E0413 23:32:26.245315 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:27.246483 kubelet[1759]: E0413 23:32:27.246368 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:28.247431 kubelet[1759]: E0413 23:32:28.247333 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Apr 13 23:32:29.263415 kubelet[1759]: E0413 23:32:29.263331 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:30.264072 kubelet[1759]: E0413 23:32:30.263961 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:31.159496 kubelet[1759]: E0413 23:32:31.159377 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:32:31.265212 kubelet[1759]: E0413 23:32:31.264382 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:32.267979 kubelet[1759]: E0413 23:32:32.267683 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:33.290271 kubelet[1759]: E0413 23:32:33.286363 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:34.287614 kubelet[1759]: E0413 23:32:34.287331 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:35.288102 kubelet[1759]: E0413 23:32:35.288041 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:35.978335 kubelet[1759]: E0413 23:32:35.978262 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:36.288482 kubelet[1759]: E0413 23:32:36.288423 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:36.846113 systemd[1]: Created slice 
kubepods-besteffort-pod231d8317_ddc2_4f72_b5ca_815bb1652f76.slice - libcontainer container kubepods-besteffort-pod231d8317_ddc2_4f72_b5ca_815bb1652f76.slice. Apr 13 23:32:37.014037 kubelet[1759]: I0413 23:32:37.013872 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcq4k\" (UniqueName: \"kubernetes.io/projected/231d8317-ddc2-4f72-b5ca-815bb1652f76-kube-api-access-rcq4k\") pod \"nginx-deployment-bb8f74bfb-pkfzq\" (UID: \"231d8317-ddc2-4f72-b5ca-815bb1652f76\") " pod="default/nginx-deployment-bb8f74bfb-pkfzq" Apr 13 23:32:37.288975 kubelet[1759]: E0413 23:32:37.288903 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:37.452624 containerd[1460]: time="2026-04-13T23:32:37.452250428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-pkfzq,Uid:231d8317-ddc2-4f72-b5ca-815bb1652f76,Namespace:default,Attempt:0,}" Apr 13 23:32:38.290040 kubelet[1759]: E0413 23:32:38.289952 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:38.334485 systemd-networkd[1386]: cali0cedcf6e25e: Link UP Apr 13 23:32:38.334652 systemd-networkd[1386]: cali0cedcf6e25e: Gained carrier Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.545 [INFO][5356] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0 nginx-deployment-bb8f74bfb- default 231d8317-ddc2-4f72-b5ca-815bb1652f76 1682 0 2026-04-13 23:32:36 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.139 nginx-deployment-bb8f74bfb-pkfzq eth0 default [] [] [kns.default ksa.default.default] cali0cedcf6e25e [] [] }} 
ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.546 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.599 [INFO][5370] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" HandleID="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Workload="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.733 [INFO][5370] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" HandleID="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Workload="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef890), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.139", "pod":"nginx-deployment-bb8f74bfb-pkfzq", "timestamp":"2026-04-13 23:32:37.599187374 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00015f4a0)} Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.733 [INFO][5370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.734 [INFO][5370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.734 [INFO][5370] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.788 [INFO][5370] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.924 [INFO][5370] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:37.989 [INFO][5370] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.091 [INFO][5370] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.192 [INFO][5370] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.192 [INFO][5370] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.239 [INFO][5370] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.258 [INFO][5370] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.328 [INFO][5370] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.9/26] block=192.168.100.0/26 handle="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.328 [INFO][5370] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.9/26] handle="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" host="10.0.0.139" Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.328 [INFO][5370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 23:32:38.386080 containerd[1460]: 2026-04-13 23:32:38.328 [INFO][5370] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.9/26] IPv6=[] ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" HandleID="k8s-pod-network.2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Workload="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.330 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"231d8317-ddc2-4f72-b5ca-815bb1652f76", ResourceVersion:"1682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-pkfzq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0cedcf6e25e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.330 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.9/32] ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.330 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cedcf6e25e ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.332 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.333 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"231d8317-ddc2-4f72-b5ca-815bb1652f76", ResourceVersion:"1682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c", Pod:"nginx-deployment-bb8f74bfb-pkfzq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0cedcf6e25e", MAC:"1e:6f:08:0f:3d:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:38.386698 containerd[1460]: 2026-04-13 23:32:38.383 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c" Namespace="default" Pod="nginx-deployment-bb8f74bfb-pkfzq" WorkloadEndpoint="10.0.0.139-k8s-nginx--deployment--bb8f74bfb--pkfzq-eth0" Apr 13 
23:32:38.406072 containerd[1460]: time="2026-04-13T23:32:38.405965371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:32:38.406072 containerd[1460]: time="2026-04-13T23:32:38.406023201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:32:38.406072 containerd[1460]: time="2026-04-13T23:32:38.406034283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:32:38.406260 containerd[1460]: time="2026-04-13T23:32:38.406104454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:32:38.426031 systemd[1]: Started cri-containerd-2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c.scope - libcontainer container 2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c. 
Apr 13 23:32:38.437251 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 13 23:32:38.469136 containerd[1460]: time="2026-04-13T23:32:38.469065760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-pkfzq,Uid:231d8317-ddc2-4f72-b5ca-815bb1652f76,Namespace:default,Attempt:0,} returns sandbox id \"2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c\"" Apr 13 23:32:38.471302 containerd[1460]: time="2026-04-13T23:32:38.471067743Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Apr 13 23:32:39.290456 kubelet[1759]: E0413 23:32:39.290168 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:40.291235 kubelet[1759]: E0413 23:32:40.291133 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:40.320058 systemd-networkd[1386]: cali0cedcf6e25e: Gained IPv6LL Apr 13 23:32:41.292377 kubelet[1759]: E0413 23:32:41.292326 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:41.348288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689619883.mount: Deactivated successfully. 
Apr 13 23:32:42.292948 kubelet[1759]: E0413 23:32:42.292876 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:42.965265 containerd[1460]: time="2026-04-13T23:32:42.965173856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:32:42.965705 containerd[1460]: time="2026-04-13T23:32:42.965639934Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63909824" Apr 13 23:32:42.967038 containerd[1460]: time="2026-04-13T23:32:42.966988612Z" level=info msg="ImageCreate event name:\"sha256:fda7399e8104578dd71a8491d53621a60c1c5ed6b7f3befb583d4f164244255e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:32:42.970673 containerd[1460]: time="2026-04-13T23:32:42.970614942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:37746262896e4e1a260f21898a0759befa3e3bc64a33bd95f7cd1b8400a9b03b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:32:42.971365 containerd[1460]: time="2026-04-13T23:32:42.971322299Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fda7399e8104578dd71a8491d53621a60c1c5ed6b7f3befb583d4f164244255e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:37746262896e4e1a260f21898a0759befa3e3bc64a33bd95f7cd1b8400a9b03b\", size \"63909702\" in 4.49953063s" Apr 13 23:32:42.971365 containerd[1460]: time="2026-04-13T23:32:42.971362024Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fda7399e8104578dd71a8491d53621a60c1c5ed6b7f3befb583d4f164244255e\"" Apr 13 23:32:43.010575 containerd[1460]: time="2026-04-13T23:32:43.010486982Z" level=info msg="CreateContainer within sandbox \"2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Apr 13 23:32:43.037255 containerd[1460]: time="2026-04-13T23:32:43.037002871Z" level=info msg="CreateContainer within sandbox \"2e3b8973ce9ca62f27cb199065224887ce5964cf7cab675491e0aaea9e9fde3c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1de681c8b0f98e08ff40f031eeb4970558dadef05d75f2823cf731b8806a1752\"" Apr 13 23:32:43.047588 containerd[1460]: time="2026-04-13T23:32:43.047157783Z" level=info msg="StartContainer for \"1de681c8b0f98e08ff40f031eeb4970558dadef05d75f2823cf731b8806a1752\"" Apr 13 23:32:43.119135 systemd[1]: Started cri-containerd-1de681c8b0f98e08ff40f031eeb4970558dadef05d75f2823cf731b8806a1752.scope - libcontainer container 1de681c8b0f98e08ff40f031eeb4970558dadef05d75f2823cf731b8806a1752. Apr 13 23:32:43.147853 containerd[1460]: time="2026-04-13T23:32:43.147779061Z" level=info msg="StartContainer for \"1de681c8b0f98e08ff40f031eeb4970558dadef05d75f2823cf731b8806a1752\" returns successfully" Apr 13 23:32:43.293499 kubelet[1759]: E0413 23:32:43.293421 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:44.294659 kubelet[1759]: E0413 23:32:44.294583 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:45.295044 kubelet[1759]: E0413 23:32:45.294940 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:46.295522 kubelet[1759]: E0413 23:32:46.295430 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:47.296828 kubelet[1759]: E0413 23:32:47.296710 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:48.297876 kubelet[1759]: E0413 23:32:48.297772 1759 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:49.299064 kubelet[1759]: E0413 23:32:49.298957 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:49.788553 kubelet[1759]: I0413 23:32:49.788429 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-pkfzq" podStartSLOduration=9.278136401 podStartE2EDuration="13.78609396s" podCreationTimestamp="2026-04-13 23:32:36 +0000 UTC" firstStartedPulling="2026-04-13 23:32:38.470539964 +0000 UTC m=+163.510811127" lastFinishedPulling="2026-04-13 23:32:42.978497542 +0000 UTC m=+168.018768686" observedRunningTime="2026-04-13 23:32:43.370515032 +0000 UTC m=+168.410786186" watchObservedRunningTime="2026-04-13 23:32:49.78609396 +0000 UTC m=+174.826365113" Apr 13 23:32:49.795213 systemd[1]: run-containerd-runc-k8s.io-8b16496d1a665454a29c859ee8671a54c0be4253ea2c952132eb94b3b8717aaf-runc.TUuh0J.mount: Deactivated successfully. Apr 13 23:32:50.161530 kubelet[1759]: E0413 23:32:50.160961 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:32:50.299442 kubelet[1759]: E0413 23:32:50.299383 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:51.300741 kubelet[1759]: E0413 23:32:51.300580 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 13 23:32:51.410966 systemd[1]: Created slice kubepods-besteffort-poddf7285bb_d25d_4db2_a2b6_6b9bdaa331f9.slice - libcontainer container kubepods-besteffort-poddf7285bb_d25d_4db2_a2b6_6b9bdaa331f9.slice. 
Apr 13 23:32:51.411434 kubelet[1759]: I0413 23:32:51.411367 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/df7285bb-d25d-4db2-a2b6-6b9bdaa331f9-data\") pod \"nfs-server-provisioner-0\" (UID: \"df7285bb-d25d-4db2-a2b6-6b9bdaa331f9\") " pod="default/nfs-server-provisioner-0" Apr 13 23:32:51.411434 kubelet[1759]: I0413 23:32:51.411393 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbrv\" (UniqueName: \"kubernetes.io/projected/df7285bb-d25d-4db2-a2b6-6b9bdaa331f9-kube-api-access-wnbrv\") pod \"nfs-server-provisioner-0\" (UID: \"df7285bb-d25d-4db2-a2b6-6b9bdaa331f9\") " pod="default/nfs-server-provisioner-0" Apr 13 23:32:51.411607 kubelet[1759]: E0413 23:32:51.411541 1759 status_manager.go:1018] "Failed to get status for pod" err="pods \"nfs-server-provisioner-0\" is forbidden: User \"system:node:10.0.0.139\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node '10.0.0.139' and this object" podUID="df7285bb-d25d-4db2-a2b6-6b9bdaa331f9" pod="default/nfs-server-provisioner-0" Apr 13 23:32:51.719524 containerd[1460]: time="2026-04-13T23:32:51.719349817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:df7285bb-d25d-4db2-a2b6-6b9bdaa331f9,Namespace:default,Attempt:0,}" Apr 13 23:32:52.095092 systemd-networkd[1386]: cali60e51b789ff: Link UP Apr 13 23:32:52.095222 systemd-networkd[1386]: cali60e51b789ff: Gained carrier Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.828 [INFO][5602] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default df7285bb-d25d-4db2-a2b6-6b9bdaa331f9 1757 0 2026-04-13 23:32:51 +0000 UTC map[app:nfs-server-provisioner 
apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.139 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.829 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.858 [INFO][5615] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" HandleID="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Workload="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.879 [INFO][5615] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" HandleID="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" 
Workload="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005820f0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.139", "pod":"nfs-server-provisioner-0", "timestamp":"2026-04-13 23:32:51.858277562 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000322c60)} Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.879 [INFO][5615] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.879 [INFO][5615] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.879 [INFO][5615] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139' Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.898 [INFO][5615] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.942 [INFO][5615] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.963 [INFO][5615] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.965 [INFO][5615] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.968 [INFO][5615] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.968 [INFO][5615] ipam/ipam.go 1245: Attempting to assign 
1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.969 [INFO][5615] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765 Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:51.992 [INFO][5615] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:52.090 [INFO][5615] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.10/26] block=192.168.100.0/26 handle="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:52.090 [INFO][5615] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.10/26] handle="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" host="10.0.0.139" Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:52.090 [INFO][5615] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 23:32:52.122838 containerd[1460]: 2026-04-13 23:32:52.090 [INFO][5615] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.10/26] IPv6=[] ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" HandleID="k8s-pod-network.83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Workload="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.123572 containerd[1460]: 2026-04-13 23:32:52.091 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"df7285bb-d25d-4db2-a2b6-6b9bdaa331f9", ResourceVersion:"1757", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:52.123572 containerd[1460]: 2026-04-13 23:32:52.092 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.10/32] ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.123572 containerd[1460]: 2026-04-13 23:32:52.092 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.123572 containerd[1460]: 2026-04-13 23:32:52.093 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.123750 containerd[1460]: 2026-04-13 23:32:52.094 [INFO][5602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"df7285bb-d25d-4db2-a2b6-6b9bdaa331f9", ResourceVersion:"1757", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:67:93:58:d7:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 23:32:52.123750 containerd[1460]: 2026-04-13 23:32:52.120 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.139-k8s-nfs--server--provisioner--0-eth0" Apr 13 23:32:52.142365 containerd[1460]: time="2026-04-13T23:32:52.142104153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:32:52.142365 containerd[1460]: time="2026-04-13T23:32:52.142198693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:32:52.142365 containerd[1460]: time="2026-04-13T23:32:52.142239197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:32:52.142365 containerd[1460]: time="2026-04-13T23:32:52.142340449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:32:52.166104 systemd[1]: Started cri-containerd-83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765.scope - libcontainer container 83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765.
Apr 13 23:32:52.176520 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 13 23:32:52.201822 containerd[1460]: time="2026-04-13T23:32:52.201767874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:df7285bb-d25d-4db2-a2b6-6b9bdaa331f9,Namespace:default,Attempt:0,} returns sandbox id \"83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765\""
Apr 13 23:32:52.203719 containerd[1460]: time="2026-04-13T23:32:52.203614429Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Apr 13 23:32:52.301459 kubelet[1759]: E0413 23:32:52.301400 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:53.158539 kubelet[1759]: E0413 23:32:53.158450 1759 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:32:53.301824 kubelet[1759]: E0413 23:32:53.301698 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:53.504282 systemd-networkd[1386]: cali60e51b789ff: Gained IPv6LL
Apr 13 23:32:54.302901 kubelet[1759]: E0413 23:32:54.302847 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:54.875940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085683247.mount: Deactivated successfully.
Apr 13 23:32:55.303735 kubelet[1759]: E0413 23:32:55.303688 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:55.978319 kubelet[1759]: E0413 23:32:55.978229 1759 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:56.267420 containerd[1460]: time="2026-04-13T23:32:56.267224821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:32:56.268501 containerd[1460]: time="2026-04-13T23:32:56.268418411Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039034"
Apr 13 23:32:56.269807 containerd[1460]: time="2026-04-13T23:32:56.269758206Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:32:56.283623 containerd[1460]: time="2026-04-13T23:32:56.283562044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:32:56.284412 containerd[1460]: time="2026-04-13T23:32:56.284361960Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.080712039s"
Apr 13 23:32:56.284412 containerd[1460]: time="2026-04-13T23:32:56.284399752Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Apr 13 23:32:56.289881 containerd[1460]: time="2026-04-13T23:32:56.289778192Z" level=info msg="CreateContainer within sandbox \"83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Apr 13 23:32:56.304177 containerd[1460]: time="2026-04-13T23:32:56.304126121Z" level=info msg="CreateContainer within sandbox \"83041067ef2607e1e937f8a5478634dd05f4b75bbf5680607a779578e17dc765\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a9e259c2de9ce8a1680c5433282571915765e346868cf575f2fb534dcdd8c55f\""
Apr 13 23:32:56.304474 kubelet[1759]: E0413 23:32:56.304440 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:56.304952 containerd[1460]: time="2026-04-13T23:32:56.304862194Z" level=info msg="StartContainer for \"a9e259c2de9ce8a1680c5433282571915765e346868cf575f2fb534dcdd8c55f\""
Apr 13 23:32:56.336236 systemd[1]: Started cri-containerd-a9e259c2de9ce8a1680c5433282571915765e346868cf575f2fb534dcdd8c55f.scope - libcontainer container a9e259c2de9ce8a1680c5433282571915765e346868cf575f2fb534dcdd8c55f.
Apr 13 23:32:56.361773 containerd[1460]: time="2026-04-13T23:32:56.361734904Z" level=info msg="StartContainer for \"a9e259c2de9ce8a1680c5433282571915765e346868cf575f2fb534dcdd8c55f\" returns successfully"
Apr 13 23:32:57.305655 kubelet[1759]: E0413 23:32:57.305583 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:58.306221 kubelet[1759]: E0413 23:32:58.306026 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:32:59.306785 kubelet[1759]: E0413 23:32:59.306652 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:00.307156 kubelet[1759]: E0413 23:33:00.307089 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:01.307629 kubelet[1759]: E0413 23:33:01.307543 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:01.985854 kubelet[1759]: I0413 23:33:01.985392 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.903428586 podStartE2EDuration="10.985373143s" podCreationTimestamp="2026-04-13 23:32:51 +0000 UTC" firstStartedPulling="2026-04-13 23:32:52.203139908 +0000 UTC m=+177.243411052" lastFinishedPulling="2026-04-13 23:32:56.285084464 +0000 UTC m=+181.325355609" observedRunningTime="2026-04-13 23:32:57.423650573 +0000 UTC m=+182.463921725" watchObservedRunningTime="2026-04-13 23:33:01.985373143 +0000 UTC m=+187.025644298"
Apr 13 23:33:02.008117 systemd[1]: Created slice kubepods-besteffort-pod1019cdf4_2a85_42b6_ab27_6f8d57756d21.slice - libcontainer container kubepods-besteffort-pod1019cdf4_2a85_42b6_ab27_6f8d57756d21.slice.
Apr 13 23:33:02.136034 kubelet[1759]: I0413 23:33:02.135933 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-998dbc74-52b7-4c21-9ff1-348cb2e37a59\" (UniqueName: \"kubernetes.io/nfs/1019cdf4-2a85-42b6-ab27-6f8d57756d21-pvc-998dbc74-52b7-4c21-9ff1-348cb2e37a59\") pod \"test-pod-1\" (UID: \"1019cdf4-2a85-42b6-ab27-6f8d57756d21\") " pod="default/test-pod-1"
Apr 13 23:33:02.136034 kubelet[1759]: I0413 23:33:02.136033 1759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8tx\" (UniqueName: \"kubernetes.io/projected/1019cdf4-2a85-42b6-ab27-6f8d57756d21-kube-api-access-7x8tx\") pod \"test-pod-1\" (UID: \"1019cdf4-2a85-42b6-ab27-6f8d57756d21\") " pod="default/test-pod-1"
Apr 13 23:33:02.276843 kernel: FS-Cache: Loaded
Apr 13 23:33:02.308306 kubelet[1759]: E0413 23:33:02.308206 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:02.339259 kernel: RPC: Registered named UNIX socket transport module.
Apr 13 23:33:02.339416 kernel: RPC: Registered udp transport module.
Apr 13 23:33:02.339432 kernel: RPC: Registered tcp transport module.
Apr 13 23:33:02.339960 kernel: RPC: Registered tcp-with-tls transport module.
Apr 13 23:33:02.341752 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Apr 13 23:33:02.542170 kernel: NFS: Registering the id_resolver key type
Apr 13 23:33:02.542374 kernel: Key type id_resolver registered
Apr 13 23:33:02.542409 kernel: Key type id_legacy registered
Apr 13 23:33:02.566494 nfsidmap[5899]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Apr 13 23:33:02.570582 nfsidmap[5902]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Apr 13 23:33:02.613580 containerd[1460]: time="2026-04-13T23:33:02.613529520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1019cdf4-2a85-42b6-ab27-6f8d57756d21,Namespace:default,Attempt:0,}"
Apr 13 23:33:03.113386 systemd-networkd[1386]: cali5ec59c6bf6e: Link UP
Apr 13 23:33:03.114229 systemd-networkd[1386]: cali5ec59c6bf6e: Gained carrier
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.693 [INFO][5906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.139-k8s-test--pod--1-eth0 default 1019cdf4-2a85-42b6-ab27-6f8d57756d21 1839 0 2026-04-13 23:32:52 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.139 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.694 [INFO][5906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.833 [INFO][5918] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" HandleID="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Workload="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.918 [INFO][5918] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" HandleID="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Workload="10.0.0.139-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135410), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.139", "pod":"test-pod-1", "timestamp":"2026-04-13 23:33:02.833523928 +0000 UTC"}, Hostname:"10.0.0.139", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000294dc0)}
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.918 [INFO][5918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.919 [INFO][5918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.919 [INFO][5918] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.139'
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.945 [INFO][5918] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.962 [INFO][5918] ipam/ipam.go 409: Looking up existing affinities for host host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:02.997 [INFO][5918] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.025 [INFO][5918] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.076 [INFO][5918] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.076 [INFO][5918] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.087 [INFO][5918] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.097 [INFO][5918] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.109 [INFO][5918] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.11/26] block=192.168.100.0/26 handle="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.109 [INFO][5918] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.11/26] handle="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" host="10.0.0.139"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.109 [INFO][5918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.109 [INFO][5918] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.11/26] IPv6=[] ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" HandleID="k8s-pod-network.ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Workload="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.174277 containerd[1460]: 2026-04-13 23:33:03.111 [INFO][5906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1019cdf4-2a85-42b6-ab27-6f8d57756d21", ResourceVersion:"1839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.11/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 23:33:03.175146 containerd[1460]: 2026-04-13 23:33:03.111 [INFO][5906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.11/32] ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.175146 containerd[1460]: 2026-04-13 23:33:03.111 [INFO][5906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.175146 containerd[1460]: 2026-04-13 23:33:03.114 [INFO][5906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.175146 containerd[1460]: 2026-04-13 23:33:03.114 [INFO][5906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.139-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1019cdf4-2a85-42b6-ab27-6f8d57756d21", ResourceVersion:"1839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 23, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.139", ContainerID:"ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.11/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"9e:57:d5:a6:b8:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 23:33:03.175146 containerd[1460]: 2026-04-13 23:33:03.134 [INFO][5906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.139-k8s-test--pod--1-eth0"
Apr 13 23:33:03.204079 containerd[1460]: time="2026-04-13T23:33:03.203922748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:33:03.204857 containerd[1460]: time="2026-04-13T23:33:03.204732216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:33:03.204857 containerd[1460]: time="2026-04-13T23:33:03.204759393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:33:03.205053 containerd[1460]: time="2026-04-13T23:33:03.204879624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:33:03.227038 systemd[1]: Started cri-containerd-ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb.scope - libcontainer container ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb.
Apr 13 23:33:03.238442 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 13 23:33:03.261761 containerd[1460]: time="2026-04-13T23:33:03.261721939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1019cdf4-2a85-42b6-ab27-6f8d57756d21,Namespace:default,Attempt:0,} returns sandbox id \"ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb\""
Apr 13 23:33:03.263087 containerd[1460]: time="2026-04-13T23:33:03.262823845Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Apr 13 23:33:03.308625 kubelet[1759]: E0413 23:33:03.308549 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:03.669080 containerd[1460]: time="2026-04-13T23:33:03.669010689Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:33:03.669848 containerd[1460]: time="2026-04-13T23:33:03.669771771Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Apr 13 23:33:03.672742 containerd[1460]: time="2026-04-13T23:33:03.672675668Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fda7399e8104578dd71a8491d53621a60c1c5ed6b7f3befb583d4f164244255e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:37746262896e4e1a260f21898a0759befa3e3bc64a33bd95f7cd1b8400a9b03b\", size \"63909702\" in 409.819104ms"
Apr 13 23:33:03.672742 containerd[1460]: time="2026-04-13T23:33:03.672706379Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fda7399e8104578dd71a8491d53621a60c1c5ed6b7f3befb583d4f164244255e\""
Apr 13 23:33:03.676726 containerd[1460]: time="2026-04-13T23:33:03.676706384Z" level=info msg="CreateContainer within sandbox \"ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Apr 13 23:33:03.697488 containerd[1460]: time="2026-04-13T23:33:03.697370977Z" level=info msg="CreateContainer within sandbox \"ee48aa046377e9c202bd0db96f2e15d9626c320775e2416d22eb7fb61d6ba9bb\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6d99880919634b86d3c3e89d33c3f53ccca005da1388f9b9e2d90b53585a260c\""
Apr 13 23:33:03.698163 containerd[1460]: time="2026-04-13T23:33:03.698133057Z" level=info msg="StartContainer for \"6d99880919634b86d3c3e89d33c3f53ccca005da1388f9b9e2d90b53585a260c\""
Apr 13 23:33:03.745182 systemd[1]: Started cri-containerd-6d99880919634b86d3c3e89d33c3f53ccca005da1388f9b9e2d90b53585a260c.scope - libcontainer container 6d99880919634b86d3c3e89d33c3f53ccca005da1388f9b9e2d90b53585a260c.
Apr 13 23:33:03.817830 containerd[1460]: time="2026-04-13T23:33:03.817779280Z" level=info msg="StartContainer for \"6d99880919634b86d3c3e89d33c3f53ccca005da1388f9b9e2d90b53585a260c\" returns successfully"
Apr 13 23:33:04.308835 kubelet[1759]: E0413 23:33:04.308717 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:04.368971 kubelet[1759]: I0413 23:33:04.368853 1759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.957759655 podStartE2EDuration="12.368830742s" podCreationTimestamp="2026-04-13 23:32:52 +0000 UTC" firstStartedPulling="2026-04-13 23:33:03.262459171 +0000 UTC m=+188.302730314" lastFinishedPulling="2026-04-13 23:33:03.673530257 +0000 UTC m=+188.713801401" observedRunningTime="2026-04-13 23:33:04.368454563 +0000 UTC m=+189.408725713" watchObservedRunningTime="2026-04-13 23:33:04.368830742 +0000 UTC m=+189.409101899"
Apr 13 23:33:05.088189 systemd-networkd[1386]: cali5ec59c6bf6e: Gained IPv6LL
Apr 13 23:33:05.311098 kubelet[1759]: E0413 23:33:05.311007 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 13 23:33:06.312145 kubelet[1759]: E0413 23:33:06.312035 1759 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"