Apr 13 19:19:12.907463 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:19:12.907491 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:19:12.907502 kernel: KASLR enabled
Apr 13 19:19:12.907509 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:19:12.907516 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:19:12.907523 kernel: random: crng init done
Apr 13 19:19:12.907531 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:19:12.907538 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:19:12.907545 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:19:12.907554 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907562 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907569 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907575 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907583 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907592 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907602 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907610 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907617 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:12.907625 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:19:12.907632 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:19:12.907640 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:19:12.907648 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:12.907656 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 13 19:19:12.907663 kernel: Zone ranges:
Apr 13 19:19:12.907670 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:19:12.907680 kernel: DMA32 empty
Apr 13 19:19:12.907688 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:19:12.907696 kernel: Movable zone start for each node
Apr 13 19:19:12.907703 kernel: Early memory node ranges
Apr 13 19:19:12.907711 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:19:12.907719 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:19:12.907726 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:19:12.907734 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:19:12.907741 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:19:12.907749 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:19:12.907756 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:19:12.907764 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:12.907773 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:19:12.907780 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:19:12.907786 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:19:12.907796 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:19:12.907803 kernel: psci: Trusted OS migration not required
Apr 13 19:19:12.907810 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:19:12.907818 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:19:12.907825 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:19:12.907832 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:19:12.907839 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:19:12.907846 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:19:12.907852 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:19:12.907877 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:19:12.907884 kernel: CPU features: detected: Spectre-v4
Apr 13 19:19:12.907891 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:19:12.907898 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:19:12.907907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:19:12.907914 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:19:12.907920 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:19:12.907927 kernel: alternatives: applying boot alternatives
Apr 13 19:19:12.907935 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:19:12.907943 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:19:12.907950 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:19:12.907956 kernel: Fallback order for Node 0: 0
Apr 13 19:19:12.907963 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:19:12.907970 kernel: Policy zone: Normal
Apr 13 19:19:12.907977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:19:12.907985 kernel: software IO TLB: area num 2.
Apr 13 19:19:12.907992 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:19:12.908000 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 13 19:19:12.908007 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:19:12.908030 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:19:12.908038 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:19:12.908045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:19:12.908059 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:19:12.908068 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:19:12.908075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:19:12.908082 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:19:12.908088 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:19:12.908098 kernel: GICv3: 256 SPIs implemented
Apr 13 19:19:12.908105 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:19:12.908111 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:19:12.908118 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:19:12.908125 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:19:12.908131 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:19:12.908141 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:19:12.908148 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:19:12.908154 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:19:12.908161 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:19:12.908168 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:19:12.908176 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:12.908183 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:19:12.908190 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:19:12.908197 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:19:12.908204 kernel: Console: colour dummy device 80x25
Apr 13 19:19:12.908211 kernel: ACPI: Core revision 20230628
Apr 13 19:19:12.908219 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:19:12.908226 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:19:12.908233 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:19:12.908240 kernel: landlock: Up and running.
Apr 13 19:19:12.908248 kernel: SELinux: Initializing.
Apr 13 19:19:12.908255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:12.908262 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:12.908269 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:12.908277 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:12.908283 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:19:12.908290 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:19:12.908297 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:19:12.908304 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:19:12.908313 kernel: Remapping and enabling EFI services.
Apr 13 19:19:12.908320 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:19:12.908327 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:19:12.908333 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:19:12.908340 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:19:12.908347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:12.908354 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:19:12.908363 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:19:12.908369 kernel: SMP: Total of 2 processors activated.
Apr 13 19:19:12.908377 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:19:12.908386 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:19:12.908393 kernel: CPU features: detected: Common not Private translations
Apr 13 19:19:12.908406 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:19:12.908415 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:19:12.908423 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:19:12.908431 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:19:12.908439 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:19:12.908446 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:19:12.908455 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:19:12.908463 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:19:12.908470 kernel: alternatives: applying system-wide alternatives
Apr 13 19:19:12.908477 kernel: devtmpfs: initialized
Apr 13 19:19:12.908485 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:19:12.908492 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:19:12.908500 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:19:12.908507 kernel: SMBIOS 3.0.0 present.
Apr 13 19:19:12.908516 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:19:12.908523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:19:12.908531 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:19:12.908538 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:19:12.908546 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:19:12.908553 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:19:12.908560 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Apr 13 19:19:12.908568 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:19:12.908575 kernel: cpuidle: using governor menu
Apr 13 19:19:12.908584 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:19:12.908591 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:19:12.908599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:19:12.908606 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:19:12.908614 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:19:12.908621 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:19:12.908629 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:19:12.908636 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:19:12.908644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:19:12.908652 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:19:12.908660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:19:12.908667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:19:12.908675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:19:12.908682 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:19:12.908689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:19:12.908696 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:19:12.908704 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:19:12.908711 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:19:12.908720 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:19:12.908727 kernel: ACPI: Interpreter enabled
Apr 13 19:19:12.908735 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:19:12.908742 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:19:12.908749 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:19:12.908756 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:19:12.908764 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:19:12.908956 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:19:12.911175 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:19:12.911282 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:19:12.911349 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:19:12.911414 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:19:12.911424 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:19:12.911432 kernel: PCI host bridge to bus 0000:00
Apr 13 19:19:12.911512 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:12.911575 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:19:12.911642 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:12.911703 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:19:12.911790 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:19:12.911874 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:19:12.911944 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:19:12.912126 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:12.912246 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.912318 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:19:12.912393 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.912464 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:19:12.912620 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.912734 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:19:12.912820 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.912888 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:19:12.912965 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.914191 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:19:12.914297 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.914366 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:19:12.914454 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.914520 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:19:12.914593 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.914662 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:19:12.914739 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:12.914806 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:19:12.914890 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:19:12.914957 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:19:12.916118 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:12.916217 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:19:12.916290 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:12.916378 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:12.916461 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:19:12.916538 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:19:12.916617 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:19:12.916685 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:19:12.916753 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:19:12.916834 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:19:12.916903 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:19:12.916980 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:19:12.919204 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:19:12.919304 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:19:12.919389 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:19:12.919463 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:19:12.919539 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:12.919637 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:12.919714 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:19:12.919858 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:19:12.919930 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:12.920004 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:19:12.920216 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:12.920290 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:12.920369 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:19:12.920434 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:19:12.920499 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:19:12.920570 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:19:12.920634 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:12.920700 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:12.920771 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:19:12.920838 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:19:12.920907 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:19:12.920975 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:19:12.922198 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:12.922358 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:12.922456 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:19:12.922523 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:12.922590 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:12.922674 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:19:12.922740 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:12.922806 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:12.922877 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:19:12.922946 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:12.923029 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:12.923156 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:19:12.923233 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:12.923307 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:12.923381 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:19:12.923450 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:12.923522 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:19:12.923592 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:12.923664 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:19:12.923731 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:12.923806 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:19:12.923874 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:12.923942 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:19:12.927052 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:12.927265 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:12.927344 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:12.927430 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:12.927501 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:12.927579 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:12.927649 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:12.927718 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:19:12.927786 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:12.927857 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:19:12.927926 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:19:12.927997 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:19:12.928150 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:19:12.928229 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:19:12.928296 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:19:12.928367 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:19:12.928437 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:19:12.928511 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:19:12.928584 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:19:12.928654 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:19:12.928720 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:19:12.928789 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:19:12.928857 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:19:12.929003 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:19:12.931249 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:19:12.931328 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:19:12.931405 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:19:12.931477 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:19:12.931544 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:19:12.931617 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:19:12.931696 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:19:12.931766 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:12.931837 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:19:12.931908 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:19:12.931980 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:19:12.932084 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:19:12.932161 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:12.932238 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:19:12.932314 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:19:12.932383 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:19:12.932450 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:19:12.932518 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:12.932613 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:12.932689 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:19:12.932759 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:19:12.932838 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:19:12.932911 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:19:12.932978 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:12.934235 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:12.934415 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:19:12.934543 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:19:12.934622 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:19:12.934691 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:12.934766 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:19:12.934844 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:19:12.934914 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:19:12.936122 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:19:12.936238 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:19:12.936308 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:12.936386 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:19:12.936457 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:19:12.936527 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:19:12.936601 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:19:12.936668 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:12.936733 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:12.936808 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:19:12.936886 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:19:12.936958 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:19:12.938152 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:19:12.938249 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:19:12.938338 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:12.938406 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:12.938478 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:19:12.938622 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:19:12.938690 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:12.938758 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:12.938831 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:19:12.938900 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:19:12.938975 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:19:12.939145 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:12.939227 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:12.939299 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:19:12.939371 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:12.939466 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:19:12.939541 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:19:12.939612 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:12.939686 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:19:12.939749 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:19:12.939811 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:12.939882 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:19:12.939949 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:19:12.942173 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:12.942321 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:19:12.942427 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:19:12.942533 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:12.942612 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:19:12.942675 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:19:12.942736 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:12.942808 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:19:12.942871 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:12.942944 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:12.943033 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:19:12.943122 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:12.943188 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:12.943259 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:19:12.943321 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:12.943384 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:12.943455 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:19:12.943517 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:19:12.943583 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:12.943593 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:19:12.943601 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:19:12.943609 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:19:12.943617 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:19:12.943625 kernel: iommu: Default domain type: Translated
Apr 13 19:19:12.943633 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:19:12.943641 kernel: efivars: Registered efivars operations
Apr 13 19:19:12.943649 kernel: vgaarb: loaded
Apr 13 19:19:12.943659 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:19:12.943667 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:19:12.943675 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:19:12.943685 kernel: pnp: PnP ACPI init
Apr 13 19:19:12.943770 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:19:12.943781 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:19:12.943789 kernel: NET: Registered PF_INET protocol family
Apr 13 19:19:12.943798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:19:12.943809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:19:12.943817 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:19:12.943825 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:19:12.943833
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:19:12.943841 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:19:12.943849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:12.943857 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:12.943865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:19:12.943946 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:19:12.943960 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:19:12.943968 kernel: kvm [1]: HYP mode not available Apr 13 19:19:12.943976 kernel: Initialise system trusted keyrings Apr 13 19:19:12.943984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:19:12.943992 kernel: Key type asymmetric registered Apr 13 19:19:12.944000 kernel: Asymmetric key parser 'x509' registered Apr 13 19:19:12.944008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:19:12.947121 kernel: io scheduler mq-deadline registered Apr 13 19:19:12.947130 kernel: io scheduler kyber registered Apr 13 19:19:12.947148 kernel: io scheduler bfq registered Apr 13 19:19:12.947157 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:19:12.947308 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:19:12.947385 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:19:12.947455 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.947531 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:19:12.947602 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:19:12.947675 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.947748 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:19:12.947817 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:19:12.947884 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.947956 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:19:12.948048 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:19:12.948143 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.948219 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:19:12.948287 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:19:12.948358 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.948432 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:19:12.948504 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:19:12.948605 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.948685 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:19:12.948754 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:19:12.948822 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.948895 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:19:12.948968 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:19:12.950343 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:12.950373 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:19:12.950449 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 13 19:19:12.950515 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 13 19:19:12.950637 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 13 19:19:12.950653 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:19:12.950672 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:19:12.950680 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:19:12.950767 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 13 19:19:12.950844 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 13 19:19:12.950856 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:19:12.950864 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:19:12.950934 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 13 19:19:12.950944 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 13 19:19:12.950952 kernel: thunder_xcv, ver 1.0
Apr 13 19:19:12.950963 kernel: thunder_bgx, ver 1.0
Apr 13 19:19:12.950971 kernel: nicpf, ver 1.0
Apr 13 19:19:12.950979 kernel: nicvf, ver 1.0
Apr 13 19:19:12.951138 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:19:12.951218 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:19:12 UTC (1776107952)
Apr 13 19:19:12.951230 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:19:12.951238 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 13 19:19:12.951246 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:19:12.951257 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:19:12.951265 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:19:12.951272 kernel: Segment Routing with IPv6
Apr 13 19:19:12.951280 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:19:12.951288 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:19:12.951296 kernel: Key type dns_resolver registered
Apr 13 19:19:12.951305 kernel: registered taskstats version 1
Apr 13 19:19:12.951313 kernel: Loading compiled-in X.509 certificates
Apr 13 19:19:12.951321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:19:12.951331 kernel: Key type .fscrypt registered
Apr 13 19:19:12.951338 kernel: Key type fscrypt-provisioning registered
Apr 13 19:19:12.951346 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:19:12.951354 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:19:12.951362 kernel: ima: No architecture policies found
Apr 13 19:19:12.951371 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:19:12.951379 kernel: clk: Disabling unused clocks
Apr 13 19:19:12.951387 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:19:12.951395 kernel: Run /init as init process
Apr 13 19:19:12.951402 kernel: with arguments:
Apr 13 19:19:12.951412 kernel: /init
Apr 13 19:19:12.951420 kernel: with environment:
Apr 13 19:19:12.951427 kernel: HOME=/
Apr 13 19:19:12.951435 kernel: TERM=linux
Apr 13 19:19:12.951445 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:19:12.951455 systemd[1]: Detected virtualization kvm.
Apr 13 19:19:12.951463 systemd[1]: Detected architecture arm64.
Apr 13 19:19:12.951473 systemd[1]: Running in initrd.
Apr 13 19:19:12.951481 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:19:12.951489 systemd[1]: Hostname set to .
Apr 13 19:19:12.951498 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:19:12.951506 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:19:12.951514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:12.951523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:12.951532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:19:12.951542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:19:12.951551 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:19:12.951559 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:19:12.951569 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:19:12.951578 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:19:12.951588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:12.951598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:19:12.951608 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:19:12.951616 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:19:12.951624 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:19:12.951633 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:19:12.951641 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:19:12.951649 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:19:12.951658 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:19:12.951666 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:19:12.951676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:12.951686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:12.951695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:12.951703 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:19:12.951712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:19:12.951720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:19:12.951730 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:19:12.951738 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:19:12.951747 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:19:12.951757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:19:12.951765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:12.951773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:19:12.951807 systemd-journald[236]: Collecting audit messages is disabled.
Apr 13 19:19:12.951831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:19:12.951839 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:19:12.951848 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:19:12.951857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:12.951867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:12.951876 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:19:12.951888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:19:12.951898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:19:12.951909 systemd-journald[236]: Journal started
Apr 13 19:19:12.951929 systemd-journald[236]: Runtime Journal (/run/log/journal/4ec0a2a2a9864b568f1a40519af511fc) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:19:12.925429 systemd-modules-load[237]: Inserted module 'overlay'
Apr 13 19:19:12.956617 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:19:12.956644 kernel: Bridge firewalling registered
Apr 13 19:19:12.954652 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 13 19:19:12.959305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:19:12.965129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:19:12.973283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:19:12.981293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:19:12.986582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:12.996312 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:19:12.997189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:19:13.011707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:19:13.018041 dracut-cmdline[267]: dracut-dracut-053
Apr 13 19:19:13.021737 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:19:13.024931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:19:13.057262 systemd-resolved[279]: Positive Trust Anchors:
Apr 13 19:19:13.057280 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:19:13.057312 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:19:13.062978 systemd-resolved[279]: Defaulting to hostname 'linux'.
Apr 13 19:19:13.064168 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:19:13.064822 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:19:13.137078 kernel: SCSI subsystem initialized
Apr 13 19:19:13.142061 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:19:13.151047 kernel: iscsi: registered transport (tcp)
Apr 13 19:19:13.166125 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:19:13.166208 kernel: QLogic iSCSI HBA Driver
Apr 13 19:19:13.216443 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:19:13.222359 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:19:13.241412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:19:13.241490 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:19:13.242417 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:19:13.300101 kernel: raid6: neonx8 gen() 15700 MB/s
Apr 13 19:19:13.312070 kernel: raid6: neonx4 gen() 15572 MB/s
Apr 13 19:19:13.329109 kernel: raid6: neonx2 gen() 13230 MB/s
Apr 13 19:19:13.346234 kernel: raid6: neonx1 gen() 10400 MB/s
Apr 13 19:19:13.363093 kernel: raid6: int64x8 gen() 6776 MB/s
Apr 13 19:19:13.380136 kernel: raid6: int64x4 gen() 7305 MB/s
Apr 13 19:19:13.397085 kernel: raid6: int64x2 gen() 6099 MB/s
Apr 13 19:19:13.414100 kernel: raid6: int64x1 gen() 5034 MB/s
Apr 13 19:19:13.414184 kernel: raid6: using algorithm neonx8 gen() 15700 MB/s
Apr 13 19:19:13.431175 kernel: raid6: .... xor() 11812 MB/s, rmw enabled
Apr 13 19:19:13.431250 kernel: raid6: using neon recovery algorithm
Apr 13 19:19:13.436319 kernel: xor: measuring software checksum speed
Apr 13 19:19:13.436404 kernel: 8regs : 19764 MB/sec
Apr 13 19:19:13.437251 kernel: 32regs : 19627 MB/sec
Apr 13 19:19:13.437305 kernel: arm64_neon : 27070 MB/sec
Apr 13 19:19:13.437327 kernel: xor: using function: arm64_neon (27070 MB/sec)
Apr 13 19:19:13.490106 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:19:13.509211 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:19:13.516307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:19:13.531790 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Apr 13 19:19:13.535441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:19:13.543410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:19:13.560603 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Apr 13 19:19:13.599119 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:19:13.607267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:19:13.659296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:19:13.673343 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:19:13.692896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:19:13.694929 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:19:13.696401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:13.697479 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:19:13.709594 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:19:13.731250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:19:13.783993 kernel: scsi host0: Virtio SCSI HBA
Apr 13 19:19:13.791656 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 19:19:13.791932 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 19:19:13.803038 kernel: ACPI: bus type USB registered
Apr 13 19:19:13.805044 kernel: usbcore: registered new interface driver usbfs
Apr 13 19:19:13.805106 kernel: usbcore: registered new interface driver hub
Apr 13 19:19:13.806068 kernel: usbcore: registered new device driver usb
Apr 13 19:19:13.807229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:19:13.807364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:13.809619 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:13.810361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:19:13.810536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:13.811942 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:13.820659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:13.845117 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 13 19:19:13.849326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:13.852170 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 13 19:19:13.852406 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:19:13.855075 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 13 19:19:13.855472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:13.867300 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:19:13.869528 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 13 19:19:13.870981 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 13 19:19:13.871609 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 13 19:19:13.871959 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 13 19:19:13.872162 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 13 19:19:13.872280 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 13 19:19:13.872412 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 19:19:13.875300 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:19:13.875358 kernel: GPT:17805311 != 80003071
Apr 13 19:19:13.875369 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:19:13.875379 kernel: GPT:17805311 != 80003071
Apr 13 19:19:13.875389 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:19:13.876086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:13.876781 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 13 19:19:13.876982 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:19:13.877147 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 13 19:19:13.880120 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 13 19:19:13.885095 kernel: hub 1-0:1.0: USB hub found
Apr 13 19:19:13.885421 kernel: hub 1-0:1.0: 4 ports detected
Apr 13 19:19:13.886711 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 13 19:19:13.889039 kernel: hub 2-0:1.0: USB hub found
Apr 13 19:19:13.891077 kernel: hub 2-0:1.0: 4 ports detected
Apr 13 19:19:13.895382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:13.935040 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (499)
Apr 13 19:19:13.935166 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (505)
Apr 13 19:19:13.944450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 19:19:13.952214 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 19:19:13.965542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:19:13.971800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 19:19:13.972756 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 19:19:13.983247 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:19:13.992548 disk-uuid[573]: Primary Header is updated.
Apr 13 19:19:13.992548 disk-uuid[573]: Secondary Entries is updated.
Apr 13 19:19:13.992548 disk-uuid[573]: Secondary Header is updated.
Apr 13 19:19:14.001057 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:14.004039 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:14.008098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:14.129105 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 13 19:19:14.266900 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 13 19:19:14.267071 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 13 19:19:14.267483 kernel: usbcore: registered new interface driver usbhid
Apr 13 19:19:14.267515 kernel: usbhid: USB HID core driver
Apr 13 19:19:14.375194 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 13 19:19:14.515793 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 13 19:19:14.569108 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 13 19:19:15.013068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:15.014068 disk-uuid[574]: The operation has completed successfully.
Apr 13 19:19:15.070763 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:19:15.070874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:19:15.081270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:19:15.088666 sh[593]: Success
Apr 13 19:19:15.103186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:19:15.160894 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:19:15.175262 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:19:15.176147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:19:15.197452 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:19:15.197531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:15.197555 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:19:15.197579 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:19:15.197600 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:19:15.205060 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:19:15.207384 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:19:15.209060 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:19:15.214251 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:19:15.218159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:19:15.228133 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:15.228185 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:15.228197 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:15.233071 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:15.233136 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:15.243293 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:19:15.245092 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:15.256814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:19:15.263371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:19:15.365635 ignition[675]: Ignition 2.19.0
Apr 13 19:19:15.366854 ignition[675]: Stage: fetch-offline
Apr 13 19:19:15.366915 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:15.366925 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:15.367127 ignition[675]: parsed url from cmdline: ""
Apr 13 19:19:15.367130 ignition[675]: no config URL provided
Apr 13 19:19:15.367135 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:15.367143 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:15.371160 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:19:15.367148 ignition[675]: failed to fetch config: resource requires networking
Apr 13 19:19:15.367344 ignition[675]: Ignition finished successfully
Apr 13 19:19:15.375100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:19:15.388333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:19:15.412379 systemd-networkd[781]: lo: Link UP
Apr 13 19:19:15.412388 systemd-networkd[781]: lo: Gained carrier
Apr 13 19:19:15.416449 systemd-networkd[781]: Enumeration completed
Apr 13 19:19:15.418224 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:15.418229 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:15.418251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:19:15.419298 systemd[1]: Reached target network.target - Network.
Apr 13 19:19:15.423220 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:15.423224 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:15.424485 systemd-networkd[781]: eth0: Link UP
Apr 13 19:19:15.424511 systemd-networkd[781]: eth0: Gained carrier
Apr 13 19:19:15.424532 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:15.431553 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:19:15.433467 systemd-networkd[781]: eth1: Link UP
Apr 13 19:19:15.433471 systemd-networkd[781]: eth1: Gained carrier
Apr 13 19:19:15.433482 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:15.447218 ignition[783]: Ignition 2.19.0
Apr 13 19:19:15.447237 ignition[783]: Stage: fetch
Apr 13 19:19:15.447479 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:15.447489 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:15.447599 ignition[783]: parsed url from cmdline: ""
Apr 13 19:19:15.447602 ignition[783]: no config URL provided
Apr 13 19:19:15.447610 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:15.447618 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:15.447638 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 19:19:15.448460 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 19:19:15.475136 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:19:15.493226 systemd-networkd[781]: eth0: DHCPv4 address 178.105.12.165/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:19:15.648578 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 19:19:15.657257 ignition[783]: GET result: OK
Apr 13 19:19:15.657424 ignition[783]: parsing config with SHA512: e22f09fb0d31e8a6684f0f9d718b81c6b0d2d3c2312830ffc4a3d5fb7ec8fee5f678391a14a5934853db1a1247b41bbecdfab7dbde6e82316e7cc660cc81dfce
Apr 13 19:19:15.665956 unknown[783]: fetched base config from "system"
Apr 13 19:19:15.665965 unknown[783]: fetched base config from "system"
Apr 13 19:19:15.665971 unknown[783]: fetched user config from "hetzner"
Apr 13 19:19:15.667544 ignition[783]: fetch: fetch complete
Apr 13 19:19:15.667550 ignition[783]: fetch: fetch passed
Apr 13 19:19:15.667629 ignition[783]: Ignition finished successfully
Apr 13 19:19:15.672477 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:19:15.689362 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:19:15.708736 ignition[790]: Ignition 2.19.0
Apr 13 19:19:15.708749 ignition[790]: Stage: kargs
Apr 13 19:19:15.708964 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:15.708973 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:15.715352 ignition[790]: kargs: kargs passed
Apr 13 19:19:15.715446 ignition[790]: Ignition finished successfully
Apr 13 19:19:15.718570 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:19:15.726306 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:19:15.740760 ignition[797]: Ignition 2.19.0
Apr 13 19:19:15.740771 ignition[797]: Stage: disks
Apr 13 19:19:15.740976 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:15.740987 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:15.742190 ignition[797]: disks: disks passed
Apr 13 19:19:15.744907 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:19:15.742257 ignition[797]: Ignition finished successfully
Apr 13 19:19:15.745873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:19:15.746698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:19:15.747473 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:19:15.748614 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:19:15.749689 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:19:15.762283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:19:15.779452 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 19:19:15.784950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:19:15.793242 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:19:15.845067 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:19:15.845674 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 19:19:15.847690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:19:15.856207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:19:15.861323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:19:15.864359 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 19:19:15.864985 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:19:15.865043 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 13 19:19:15.874107 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Apr 13 19:19:15.876178 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:19:15.876228 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:19:15.876239 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:19:15.883052 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:19:15.883123 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:19:15.886933 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:19:15.893357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 19:19:15.897794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:19:15.945926 coreos-metadata[816]: Apr 13 19:19:15.945 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 13 19:19:15.948262 coreos-metadata[816]: Apr 13 19:19:15.948 INFO Fetch successful Apr 13 19:19:15.948904 coreos-metadata[816]: Apr 13 19:19:15.948 INFO wrote hostname ci-4081-3-7-b-7ea64c4796 to /sysroot/etc/hostname Apr 13 19:19:15.952184 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:19:15.954456 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:19:15.960297 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:19:15.965945 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:19:15.971487 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:19:16.074714 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:19:16.086237 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:19:16.093281 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 13 19:19:16.100023 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:19:16.125530 ignition[931]: INFO : Ignition 2.19.0 Apr 13 19:19:16.127691 ignition[931]: INFO : Stage: mount Apr 13 19:19:16.127691 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:19:16.127691 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:19:16.128618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 19:19:16.133735 ignition[931]: INFO : mount: mount passed Apr 13 19:19:16.133735 ignition[931]: INFO : Ignition finished successfully Apr 13 19:19:16.133870 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:19:16.145264 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:19:16.198124 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 19:19:16.206394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:19:16.217185 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Apr 13 19:19:16.219128 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:19:16.219191 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:19:16.219204 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:19:16.222045 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:19:16.222120 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:19:16.225353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 19:19:16.252473 ignition[960]: INFO : Ignition 2.19.0 Apr 13 19:19:16.252473 ignition[960]: INFO : Stage: files Apr 13 19:19:16.254331 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:19:16.254331 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:19:16.254331 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:19:16.259281 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:19:16.259281 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:19:16.259281 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:19:16.262635 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:19:16.262635 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:19:16.260509 unknown[960]: wrote ssh authorized keys file for user: core Apr 13 19:19:16.265876 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:19:16.265876 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:19:16.636230 systemd-networkd[781]: eth1: Gained IPv6LL Apr 13 19:19:17.467344 systemd-networkd[781]: eth0: Gained IPv6LL Apr 13 19:19:17.978459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:19:20.006920 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:19:20.006920 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:19:20.010083 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 
19:19:20.010083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Apr 13 19:19:20.526053 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:19:22.215121 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:19:22.215121 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:19:22.221955 ignition[960]: INFO : files: createResultFile: createFiles: 
op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:19:22.221955 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:19:22.221955 ignition[960]: INFO : files: files passed Apr 13 19:19:22.221955 ignition[960]: INFO : Ignition finished successfully Apr 13 19:19:22.223576 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:19:22.237260 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:19:22.242217 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 19:19:22.247884 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:19:22.248055 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:19:22.262967 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:19:22.262967 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:19:22.266507 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:19:22.269665 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:19:22.271288 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:19:22.276292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:19:22.322627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:19:22.322754 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:19:22.327563 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:19:22.330650 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 13 19:19:22.331385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:19:22.338341 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:19:22.365618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:19:22.375385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:19:22.388672 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:19:22.389553 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:19:22.391802 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:19:22.393060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 19:19:22.393202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:19:22.394904 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:19:22.395691 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:19:22.397078 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:19:22.398281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:19:22.399431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:19:22.400669 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:19:22.401781 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:19:22.403106 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 19:19:22.404237 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 19:19:22.405410 systemd[1]: Stopped target swap.target - Swaps. Apr 13 19:19:22.406402 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 13 19:19:22.406532 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:19:22.407877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:19:22.409117 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:19:22.410323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 19:19:22.412100 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:19:22.412831 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 19:19:22.412959 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 19:19:22.414766 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 19:19:22.414895 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:19:22.416300 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 19:19:22.416417 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 19:19:22.417562 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 13 19:19:22.417666 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:19:22.424358 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 19:19:22.427529 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:19:22.427702 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:19:22.433769 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 19:19:22.434680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 19:19:22.434875 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:19:22.440351 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 13 19:19:22.440529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:19:22.448195 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 19:19:22.449948 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:19:22.450779 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 19:19:22.454723 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:19:22.455492 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:19:22.459870 ignition[1012]: INFO : Ignition 2.19.0 Apr 13 19:19:22.459870 ignition[1012]: INFO : Stage: umount Apr 13 19:19:22.461063 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:19:22.461063 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:19:22.463189 ignition[1012]: INFO : umount: umount passed Apr 13 19:19:22.463189 ignition[1012]: INFO : Ignition finished successfully Apr 13 19:19:22.464298 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:19:22.464454 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:19:22.465310 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:19:22.465357 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:19:22.466269 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:19:22.466385 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:19:22.467295 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:19:22.467334 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:19:22.468296 systemd[1]: Stopped target network.target - Network. Apr 13 19:19:22.469238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:19:22.469294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 13 19:19:22.470529 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:19:22.471518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:19:22.476103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:19:22.477214 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:19:22.477866 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:19:22.478736 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:19:22.478794 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:19:22.480612 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:19:22.480653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:19:22.483554 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:19:22.483626 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:19:22.484577 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:19:22.484619 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:19:22.485919 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:19:22.486113 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:19:22.487295 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:19:22.488548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:19:22.492287 systemd-networkd[781]: eth0: DHCPv6 lease lost Apr 13 19:19:22.496122 systemd-networkd[781]: eth1: DHCPv6 lease lost Apr 13 19:19:22.498551 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:19:22.498941 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:19:22.500218 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Apr 13 19:19:22.500312 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:19:22.507185 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:19:22.507684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:19:22.507750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:19:22.510067 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:19:22.513908 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:19:22.514046 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:19:22.526690 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:19:22.528513 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:19:22.531673 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:19:22.531828 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:19:22.534298 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:19:22.534381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:19:22.535814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:19:22.535849 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:19:22.536861 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:19:22.536913 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:19:22.538512 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:19:22.538563 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:19:22.540320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:19:22.540375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 19:19:22.552690 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:19:22.556317 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:19:22.556437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:19:22.557119 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:19:22.557168 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:19:22.557769 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:19:22.557807 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:19:22.560413 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 19:19:22.560475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:19:22.561333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:19:22.561380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:22.562432 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:19:22.563408 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:19:22.564798 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:19:22.573686 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:19:22.586238 systemd[1]: Switching root. Apr 13 19:19:22.626775 systemd-journald[236]: Journal stopped Apr 13 19:19:23.636470 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
Apr 13 19:19:23.636560 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 19:19:23.636573 kernel: SELinux: policy capability open_perms=1 Apr 13 19:19:23.636583 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 19:19:23.636595 kernel: SELinux: policy capability always_check_network=0 Apr 13 19:19:23.636605 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 19:19:23.636621 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 19:19:23.636630 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 19:19:23.636639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 19:19:23.636649 kernel: audit: type=1403 audit(1776107962.778:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 19:19:23.636660 systemd[1]: Successfully loaded SELinux policy in 38.699ms. Apr 13 19:19:23.636686 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.709ms. Apr 13 19:19:23.636697 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:19:23.636708 systemd[1]: Detected virtualization kvm. Apr 13 19:19:23.636720 systemd[1]: Detected architecture arm64. Apr 13 19:19:23.636730 systemd[1]: Detected first boot. Apr 13 19:19:23.636741 systemd[1]: Hostname set to . Apr 13 19:19:23.636751 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:19:23.636761 zram_generator::config[1055]: No configuration found. Apr 13 19:19:23.636772 systemd[1]: Populated /etc with preset unit settings. Apr 13 19:19:23.636784 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 19:19:23.636794 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 13 19:19:23.636806 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 19:19:23.636817 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 19:19:23.636828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 19:19:23.636838 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 19:19:23.636848 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 19:19:23.636858 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 19:19:23.636873 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 19:19:23.636883 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 19:19:23.636895 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 19:19:23.636905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:19:23.636916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:19:23.636927 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 19:19:23.636938 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 19:19:23.636948 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 19:19:23.636959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:19:23.636969 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 13 19:19:23.636999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:19:23.637068 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 13 19:19:23.637080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 19:19:23.637091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 19:19:23.637101 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 19:19:23.637111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:19:23.637126 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:19:23.637136 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:19:23.637149 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:19:23.637159 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:19:23.637169 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 19:19:23.637180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:19:23.637191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:19:23.637201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:19:23.637211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:19:23.637222 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:19:23.637232 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:19:23.637244 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:19:23.637255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:19:23.637270 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:19:23.637281 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 13 19:19:23.637292 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:19:23.637303 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:19:23.637313 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:19:23.637323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:23.637336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:19:23.637348 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:19:23.637361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:19:23.637374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:19:23.637384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:19:23.637395 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:19:23.637407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:19:23.637418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:19:23.637429 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:19:23.637439 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:19:23.637450 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:19:23.637461 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:19:23.637471 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:19:23.637482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:19:23.637493 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:19:23.637505 kernel: loop: module loaded
Apr 13 19:19:23.637516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:19:23.637527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:19:23.637537 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:19:23.637549 systemd[1]: Stopped verity-setup.service.
Apr 13 19:19:23.637559 kernel: ACPI: bus type drm_connector registered
Apr 13 19:19:23.637569 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:19:23.637585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:19:23.637601 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:19:23.637623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:19:23.637636 kernel: fuse: init (API version 7.39)
Apr 13 19:19:23.637645 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:19:23.637656 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:19:23.637669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:19:23.637680 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:19:23.637690 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:19:23.637701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:19:23.637716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:19:23.637728 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:19:23.637752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:19:23.637811 systemd-journald[1122]: Collecting audit messages is disabled.
Apr 13 19:19:23.637843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:19:23.637854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:19:23.637867 systemd-journald[1122]: Journal started
Apr 13 19:19:23.637890 systemd-journald[1122]: Runtime Journal (/run/log/journal/4ec0a2a2a9864b568f1a40519af511fc) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:19:23.327261 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:19:23.353477 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 19:19:23.353904 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:19:23.641031 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:19:23.641380 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:19:23.642141 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:19:23.644461 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:19:23.645467 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:19:23.645619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:19:23.646614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:19:23.647548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:19:23.648548 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:19:23.662671 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:19:23.669291 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:19:23.677254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:19:23.680137 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:19:23.680182 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:19:23.681851 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:19:23.705259 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:19:23.709456 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:19:23.710350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:19:23.715289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:19:23.721620 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:19:23.722724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:19:23.725391 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:19:23.727056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:19:23.730890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:19:23.737501 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:19:23.745215 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:19:23.756385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:19:23.757959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:19:23.762195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:19:23.764482 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:19:23.765930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:19:23.773035 kernel: loop0: detected capacity change from 0 to 200864
Apr 13 19:19:23.774615 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:19:23.782417 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:19:23.786167 systemd-journald[1122]: Time spent on flushing to /var/log/journal/4ec0a2a2a9864b568f1a40519af511fc is 31.280ms for 1129 entries.
Apr 13 19:19:23.786167 systemd-journald[1122]: System Journal (/var/log/journal/4ec0a2a2a9864b568f1a40519af511fc) is 8.0M, max 584.8M, 576.8M free.
Apr 13 19:19:23.830412 systemd-journald[1122]: Received client request to flush runtime journal.
Apr 13 19:19:23.830470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:19:23.791438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:19:23.839469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:19:23.853948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:19:23.861589 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:19:23.864729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:19:23.870950 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 19:19:23.873043 kernel: loop1: detected capacity change from 0 to 8
Apr 13 19:19:23.893556 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:19:23.902311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:19:23.906588 kernel: loop2: detected capacity change from 0 to 114432
Apr 13 19:19:23.953477 kernel: loop3: detected capacity change from 0 to 114328
Apr 13 19:19:23.955849 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 13 19:19:23.955872 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 13 19:19:23.966539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:19:23.990042 kernel: loop4: detected capacity change from 0 to 200864
Apr 13 19:19:24.017072 kernel: loop5: detected capacity change from 0 to 8
Apr 13 19:19:24.019132 kernel: loop6: detected capacity change from 0 to 114432
Apr 13 19:19:24.028039 kernel: loop7: detected capacity change from 0 to 114328
Apr 13 19:19:24.040371 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 13 19:19:24.042665 (sd-merge)[1196]: Merged extensions into '/usr'.
Apr 13 19:19:24.048235 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:19:24.048265 systemd[1]: Reloading...
Apr 13 19:19:24.218054 zram_generator::config[1222]: No configuration found.
Apr 13 19:19:24.341089 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:19:24.395235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:19:24.444317 systemd[1]: Reloading finished in 394 ms.
Apr 13 19:19:24.475584 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:19:24.481090 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:19:24.491827 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:19:24.494299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:19:24.510272 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:19:24.510294 systemd[1]: Reloading...
Apr 13 19:19:24.533262 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:19:24.533531 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:19:24.534873 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:19:24.535269 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 13 19:19:24.535321 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 13 19:19:24.538750 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:19:24.538925 systemd-tmpfiles[1260]: Skipping /boot
Apr 13 19:19:24.555144 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:19:24.555157 systemd-tmpfiles[1260]: Skipping /boot
Apr 13 19:19:24.595048 zram_generator::config[1290]: No configuration found.
Apr 13 19:19:24.708181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:19:24.755456 systemd[1]: Reloading finished in 244 ms.
Apr 13 19:19:24.775058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:19:24.776207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:19:24.794290 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:19:24.797418 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:19:24.802204 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:19:24.812331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:19:24.815258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:19:24.819285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:19:24.825626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:24.834494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:19:24.840748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:19:24.844392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:19:24.846279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:19:24.862607 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:19:24.864369 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:19:24.875516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:19:24.877131 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:19:24.881317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:19:24.881473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:19:24.887887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:24.888942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:19:24.889176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:19:24.899934 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:19:24.903510 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:19:24.913604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:19:24.914939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:19:24.915238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:19:24.920630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:24.928287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:19:24.933366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:19:24.937409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:19:24.938862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:19:24.939161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:19:24.941101 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:19:24.942872 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:19:24.948632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:19:24.948861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:19:24.966073 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:19:24.968207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:19:24.968406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:19:24.970400 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Apr 13 19:19:24.975288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:19:24.975376 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:19:24.981375 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 19:19:24.985776 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:19:24.985999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:19:25.000334 augenrules[1372]: No rules
Apr 13 19:19:25.003427 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:19:25.012236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:19:25.024784 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:19:25.120338 systemd-resolved[1331]: Positive Trust Anchors:
Apr 13 19:19:25.120361 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:19:25.120394 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:19:25.128536 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 13 19:19:25.129783 systemd-resolved[1331]: Using system hostname 'ci-4081-3-7-b-7ea64c4796'.
Apr 13 19:19:25.134042 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:19:25.135132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:19:25.170693 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 19:19:25.172145 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:19:25.190518 systemd-networkd[1382]: lo: Link UP
Apr 13 19:19:25.190532 systemd-networkd[1382]: lo: Gained carrier
Apr 13 19:19:25.193222 systemd-networkd[1382]: Enumeration completed
Apr 13 19:19:25.193363 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:19:25.194178 systemd[1]: Reached target network.target - Network.
Apr 13 19:19:25.197517 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.197532 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:25.199134 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.199148 systemd-networkd[1382]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:25.199929 systemd-networkd[1382]: eth0: Link UP
Apr 13 19:19:25.199934 systemd-networkd[1382]: eth0: Gained carrier
Apr 13 19:19:25.199951 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.208476 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:19:25.216461 systemd-networkd[1382]: eth1: Link UP
Apr 13 19:19:25.216477 systemd-networkd[1382]: eth1: Gained carrier
Apr 13 19:19:25.216500 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.227913 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.238370 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:25.251043 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1388)
Apr 13 19:19:25.259132 systemd-networkd[1382]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:19:25.261147 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 19:19:25.272235 systemd-networkd[1382]: eth0: DHCPv4 address 178.105.12.165/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:19:25.272550 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 19:19:25.276189 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 19:19:25.288042 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 19:19:25.294793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:19:25.304320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:19:25.338249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:19:25.387513 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 13 19:19:25.388359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:25.398372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:19:25.402408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:19:25.407392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:19:25.409226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:19:25.409268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:19:25.409640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:19:25.412071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:19:25.425468 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 13 19:19:25.425675 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 13 19:19:25.425696 kernel: [drm] features: -context_init
Apr 13 19:19:25.427622 kernel: [drm] number of scanouts: 1
Apr 13 19:19:25.427706 kernel: [drm] number of cap sets: 0
Apr 13 19:19:25.428239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:25.442753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:19:25.444116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:19:25.446628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:19:25.449019 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 13 19:19:25.451753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:19:25.452608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:19:25.453085 kernel: Console: switching to colour frame buffer device 160x50
Apr 13 19:19:25.472409 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 13 19:19:25.479955 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:19:25.482358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:19:25.482591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:25.491349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:25.554521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:25.559529 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:19:25.568371 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:19:25.586556 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:19:25.615348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:19:25.617833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:19:25.618802 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:19:25.619722 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:19:25.620663 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:19:25.621860 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:19:25.622744 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:19:25.623652 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:19:25.624398 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 19:19:25.624435 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:19:25.624954 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:19:25.626913 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 19:19:25.630511 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 19:19:25.637601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 19:19:25.641582 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:19:25.643290 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 19:19:25.644142 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:19:25.644677 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:19:25.645637 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:19:25.645669 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:19:25.651623 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 19:19:25.657365 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:19:25.666343 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 19:19:25.672087 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 19:19:25.679782 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 19:19:25.684950 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 19:19:25.687148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 19:19:25.692274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 19:19:25.699166 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 19:19:25.704267 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 13 19:19:25.712445 jq[1451]: false
Apr 13 19:19:25.709124 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 19:19:25.714270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 19:19:25.721393 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 19:19:25.723883 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 19:19:25.725244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 19:19:25.727279 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 19:19:25.729902 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 19:19:25.735094 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:19:25.737436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 19:19:25.737631 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 19:19:25.753523 extend-filesystems[1454]: Found loop4
Apr 13 19:19:25.753523 extend-filesystems[1454]: Found loop5
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found loop6
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found loop7
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda1
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda2
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda3
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found usr
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda4
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda6
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda7
Apr 13 19:19:25.760561 extend-filesystems[1454]: Found sda9
Apr 13 19:19:25.760561 extend-filesystems[1454]: Checking size of /dev/sda9
Apr 13 19:19:25.817239 extend-filesystems[1454]: Resized partition /dev/sda9
Apr 13 19:19:25.770503 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:19:25.826372 jq[1465]: true
Apr 13 19:19:25.826616 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Apr 13 19:19:25.770740 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:19:25.835386 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 13 19:19:25.823357 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 19:19:25.823534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 19:19:25.835763 tar[1475]: linux-arm64/LICENSE
Apr 13 19:19:25.835763 tar[1475]: linux-arm64/helm
Apr 13 19:19:25.836131 jq[1481]: true
Apr 13 19:19:25.836815 dbus-daemon[1450]: [system] SELinux support is enabled
Apr 13 19:19:25.838176 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 19:19:25.846324 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 19:19:25.846388 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 19:19:25.852363 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 19:19:25.852397 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 19:19:25.853560 coreos-metadata[1449]: Apr 13 19:19:25.853 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 13 19:19:25.858420 update_engine[1463]: I20260413 19:19:25.857686 1463 main.cc:92] Flatcar Update Engine starting
Apr 13 19:19:25.859411 coreos-metadata[1449]: Apr 13 19:19:25.859 INFO Fetch successful
Apr 13 19:19:25.862037 coreos-metadata[1449]: Apr 13 19:19:25.861 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 13 19:19:25.864056 coreos-metadata[1449]: Apr 13 19:19:25.862 INFO Fetch successful
Apr 13 19:19:25.870845 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:19:25.871997 update_engine[1463]: I20260413 19:19:25.871537 1463 update_check_scheduler.cc:74] Next update check in 6m48s
Apr 13 19:19:25.883349 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:19:25.887138 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 19:19:26.030039 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1387)
Apr 13 19:19:26.032329 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 13 19:19:26.052065 systemd-logind[1461]: New seat seat0.
Apr 13 19:19:26.073662 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 19:19:26.073662 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 13 19:19:26.073662 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 13 19:19:26.082872 extend-filesystems[1454]: Resized filesystem in /dev/sda9
Apr 13 19:19:26.082872 extend-filesystems[1454]: Found sr0
Apr 13 19:19:26.084483 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:19:26.076601 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 13 19:19:26.076616 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 13 19:19:26.078220 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:19:26.083112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:19:26.084527 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:19:26.087460 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:19:26.095870 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:19:26.109409 systemd[1]: Starting sshkeys.service...
Apr 13 19:19:26.112146 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:19:26.113326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:19:26.129410 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 19:19:26.136425 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 19:19:26.197531 coreos-metadata[1534]: Apr 13 19:19:26.197 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 13 19:19:26.201698 coreos-metadata[1534]: Apr 13 19:19:26.201 INFO Fetch successful
Apr 13 19:19:26.204374 unknown[1534]: wrote ssh authorized keys file for user: core
Apr 13 19:19:26.243574 update-ssh-keys[1538]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:19:26.244409 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 19:19:26.252076 systemd[1]: Finished sshkeys.service.
Apr 13 19:19:26.303253 containerd[1489]: time="2026-04-13T19:19:26.302778160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:19:26.382268 containerd[1489]: time="2026-04-13T19:19:26.381393080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.385315 containerd[1489]: time="2026-04-13T19:19:26.385260360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:19:26.386081 containerd[1489]: time="2026-04-13T19:19:26.386055920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:19:26.386167 containerd[1489]: time="2026-04-13T19:19:26.386152880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386382840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386408040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386487520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386501520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386681080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386696120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386709120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386721000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.387411 containerd[1489]: time="2026-04-13T19:19:26.386843640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.390034 containerd[1489]: time="2026-04-13T19:19:26.389841560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:19:26.390196 containerd[1489]: time="2026-04-13T19:19:26.390175320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:19:26.390248 containerd[1489]: time="2026-04-13T19:19:26.390235600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:19:26.390409 containerd[1489]: time="2026-04-13T19:19:26.390393000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:19:26.390583 containerd[1489]: time="2026-04-13T19:19:26.390567360Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:19:26.400880 containerd[1489]: time="2026-04-13T19:19:26.400776360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:19:26.401142 containerd[1489]: time="2026-04-13T19:19:26.401118280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:19:26.401273 containerd[1489]: time="2026-04-13T19:19:26.401255480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:19:26.402391 containerd[1489]: time="2026-04-13T19:19:26.401544320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:19:26.402391 containerd[1489]: time="2026-04-13T19:19:26.401568160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:19:26.402391 containerd[1489]: time="2026-04-13T19:19:26.401753720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:19:26.402540 containerd[1489]: time="2026-04-13T19:19:26.402429440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:19:26.402661 containerd[1489]: time="2026-04-13T19:19:26.402624480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:19:26.402697 containerd[1489]: time="2026-04-13T19:19:26.402658200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:19:26.402697 containerd[1489]: time="2026-04-13T19:19:26.402678800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:19:26.402744 containerd[1489]: time="2026-04-13T19:19:26.402698120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402744 containerd[1489]: time="2026-04-13T19:19:26.402717440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402744 containerd[1489]: time="2026-04-13T19:19:26.402734800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402792 containerd[1489]: time="2026-04-13T19:19:26.402754920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402792 containerd[1489]: time="2026-04-13T19:19:26.402772120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402832 containerd[1489]: time="2026-04-13T19:19:26.402790240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402832 containerd[1489]: time="2026-04-13T19:19:26.402806840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402865 containerd[1489]: time="2026-04-13T19:19:26.402830800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:19:26.402865 containerd[1489]: time="2026-04-13T19:19:26.402857360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.402897 containerd[1489]: time="2026-04-13T19:19:26.402878320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.402918 containerd[1489]: time="2026-04-13T19:19:26.402895520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.403000600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404067560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404105240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404129160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404146240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404164520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404186000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404219320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404233840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404256640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404279360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404313640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404330720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.404756 containerd[1489]: time="2026-04-13T19:19:26.404345440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404481640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404508120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404520520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404538240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404551480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404570120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404584000Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:19:26.405144 containerd[1489]: time="2026-04-13T19:19:26.404594680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:19:26.408056 containerd[1489]: time="2026-04-13T19:19:26.407131880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:19:26.408056 containerd[1489]: time="2026-04-13T19:19:26.407734240Z" level=info msg="Connect containerd service"
Apr 13 19:19:26.408056 containerd[1489]: time="2026-04-13T19:19:26.407814760Z" level=info msg="using legacy CRI server"
Apr 13 19:19:26.408056 containerd[1489]: time="2026-04-13T19:19:26.407840080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:19:26.408056 containerd[1489]: time="2026-04-13T19:19:26.407948120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410250440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410651880Z" level=info msg="Start subscribing containerd event"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410712520Z" level=info msg="Start recovering state"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410789200Z" level=info msg="Start event monitor"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410801960Z" level=info msg="Start snapshots syncer"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410814000Z" level=info msg="Start cni network conf syncer for default"
Apr 13 19:19:26.411694 containerd[1489]: time="2026-04-13T19:19:26.410822760Z" level=info msg="Start streaming server"
Apr 13 19:19:26.413241 containerd[1489]: time="2026-04-13T19:19:26.413197280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 19:19:26.413330 containerd[1489]: time="2026-04-13T19:19:26.413279440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 19:19:26.413448 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 19:19:26.414806 containerd[1489]: time="2026-04-13T19:19:26.414496720Z" level=info msg="containerd successfully booted in 0.114617s"
Apr 13 19:19:26.491241 systemd-networkd[1382]: eth0: Gained IPv6LL
Apr 13 19:19:26.491816 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 19:19:26.499111 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:19:26.500450 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:19:26.513259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:19:26.523337 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:19:26.566071 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:19:26.668936 tar[1475]: linux-arm64/README.md
Apr 13 19:19:26.683466 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:19:26.812155 systemd-networkd[1382]: eth1: Gained IPv6LL
Apr 13 19:19:26.812595 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 19:19:27.314356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:19:27.332752 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:19:27.492757 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:19:27.526915 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:19:27.535689 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:19:27.547525 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:19:27.547769 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:19:27.560438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 19:19:27.573279 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 19:19:27.584528 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 19:19:27.588285 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 13 19:19:27.589156 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 19:19:27.589739 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 19:19:27.591859 systemd[1]: Startup finished in 794ms (kernel) + 10.077s (initrd) + 4.852s (userspace) = 15.724s.
Apr 13 19:19:27.828421 kubelet[1563]: E0413 19:19:27.828248 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:19:27.833658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:19:27.834277 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:19:38.084665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:19:38.092389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:19:38.226257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:19:38.237388 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:19:38.289181 kubelet[1598]: E0413 19:19:38.289063 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:19:38.292916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:19:38.293196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:19:48.543504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 19:19:48.550325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:19:48.690448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:19:48.695871 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:19:48.744992 kubelet[1613]: E0413 19:19:48.744915 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:19:48.749170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:19:48.749440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:19:56.900680 systemd-timesyncd[1368]: Contacted time server 45.92.216.108:123 (2.flatcar.pool.ntp.org).
Apr 13 19:19:56.900812 systemd-timesyncd[1368]: Initial clock synchronization to Mon 2026-04-13 19:19:57.243695 UTC.
Apr 13 19:19:57.144399 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 19:19:57.153380 systemd[1]: Started sshd@0-178.105.12.165:22-50.85.169.122:54154.service - OpenSSH per-connection server daemon (50.85.169.122:54154).
Apr 13 19:19:57.286345 sshd[1621]: Accepted publickey for core from 50.85.169.122 port 54154 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:19:57.288761 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:19:57.301544 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 19:19:57.314717 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 19:19:57.322152 systemd-logind[1461]: New session 1 of user core.
Apr 13 19:19:57.334736 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 19:19:57.342511 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 19:19:57.349132 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 19:19:57.466479 systemd[1625]: Queued start job for default target default.target.
Apr 13 19:19:57.477977 systemd[1625]: Created slice app.slice - User Application Slice.
Apr 13 19:19:57.478072 systemd[1625]: Reached target paths.target - Paths.
Apr 13 19:19:57.478592 systemd[1625]: Reached target timers.target - Timers.
Apr 13 19:19:57.481471 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 19:19:57.497391 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 19:19:57.497535 systemd[1625]: Reached target sockets.target - Sockets.
Apr 13 19:19:57.497551 systemd[1625]: Reached target basic.target - Basic System.
Apr 13 19:19:57.497606 systemd[1625]: Reached target default.target - Main User Target.
Apr 13 19:19:57.497635 systemd[1625]: Startup finished in 139ms.
Apr 13 19:19:57.498257 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 19:19:57.510426 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 19:19:57.641953 systemd[1]: Started sshd@1-178.105.12.165:22-50.85.169.122:54166.service - OpenSSH per-connection server daemon (50.85.169.122:54166).
Apr 13 19:19:57.768255 sshd[1636]: Accepted publickey for core from 50.85.169.122 port 54166 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:19:57.770351 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:19:57.777154 systemd-logind[1461]: New session 2 of user core.
Apr 13 19:19:57.787447 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 19:19:57.894640 sshd[1636]: pam_unix(sshd:session): session closed for user core
Apr 13 19:19:57.898942 systemd[1]: sshd@1-178.105.12.165:22-50.85.169.122:54166.service: Deactivated successfully.
Apr 13 19:19:57.902590 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 19:19:57.903822 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit.
Apr 13 19:19:57.921174 systemd-logind[1461]: Removed session 2.
Apr 13 19:19:57.927238 systemd[1]: Started sshd@2-178.105.12.165:22-50.85.169.122:54168.service - OpenSSH per-connection server daemon (50.85.169.122:54168).
Apr 13 19:19:58.062228 sshd[1643]: Accepted publickey for core from 50.85.169.122 port 54168 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:19:58.064822 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:19:58.071415 systemd-logind[1461]: New session 3 of user core.
Apr 13 19:19:58.081337 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 19:19:58.175642 sshd[1643]: pam_unix(sshd:session): session closed for user core
Apr 13 19:19:58.181723 systemd[1]: sshd@2-178.105.12.165:22-50.85.169.122:54168.service: Deactivated successfully.
Apr 13 19:19:58.184114 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 19:19:58.186997 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit.
Apr 13 19:19:58.188360 systemd-logind[1461]: Removed session 3.
Apr 13 19:19:58.211514 systemd[1]: Started sshd@3-178.105.12.165:22-50.85.169.122:54176.service - OpenSSH per-connection server daemon (50.85.169.122:54176).
Apr 13 19:19:58.349689 sshd[1650]: Accepted publickey for core from 50.85.169.122 port 54176 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:19:58.351118 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:19:58.357256 systemd-logind[1461]: New session 4 of user core.
Apr 13 19:19:58.370409 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:19:58.477585 sshd[1650]: pam_unix(sshd:session): session closed for user core
Apr 13 19:19:58.484462 systemd[1]: sshd@3-178.105.12.165:22-50.85.169.122:54176.service: Deactivated successfully.
Apr 13 19:19:58.487990 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:19:58.490635 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:19:58.491876 systemd-logind[1461]: Removed session 4.
Apr 13 19:19:58.508602 systemd[1]: Started sshd@4-178.105.12.165:22-50.85.169.122:54180.service - OpenSSH per-connection server daemon (50.85.169.122:54180).
Apr 13 19:19:58.652104 sshd[1657]: Accepted publickey for core from 50.85.169.122 port 54180 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:19:58.653688 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:19:58.659352 systemd-logind[1461]: New session 5 of user core. Apr 13 19:19:58.672469 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:19:58.775714 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:19:58.776042 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:19:58.777492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:19:58.782344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:19:58.800827 sudo[1660]: pam_unix(sudo:session): session closed for user root Apr 13 19:19:58.819703 sshd[1657]: pam_unix(sshd:session): session closed for user core Apr 13 19:19:58.824757 systemd[1]: sshd@4-178.105.12.165:22-50.85.169.122:54180.service: Deactivated successfully. Apr 13 19:19:58.831720 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:19:58.833382 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:19:58.858487 systemd[1]: Started sshd@5-178.105.12.165:22-50.85.169.122:54192.service - OpenSSH per-connection server daemon (50.85.169.122:54192). Apr 13 19:19:58.861247 systemd-logind[1461]: Removed session 5. Apr 13 19:19:58.946512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:19:58.948473 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:19:58.983515 sshd[1668]: Accepted publickey for core from 50.85.169.122 port 54192 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:19:58.986346 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:19:58.994440 systemd-logind[1461]: New session 6 of user core. Apr 13 19:19:58.998311 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 19:19:58.999743 kubelet[1675]: E0413 19:19:58.999677 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:19:59.003789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:19:59.003949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:19:59.089167 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:19:59.089511 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:19:59.093710 sudo[1685]: pam_unix(sudo:session): session closed for user root Apr 13 19:19:59.100587 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:19:59.100890 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:19:59.117337 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:19:59.121849 auditctl[1688]: No rules Apr 13 19:19:59.123419 systemd[1]: audit-rules.service: Deactivated successfully. 
Apr 13 19:19:59.123688 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:19:59.134941 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:19:59.161867 augenrules[1706]: No rules Apr 13 19:19:59.163544 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:19:59.165781 sudo[1684]: pam_unix(sudo:session): session closed for user root Apr 13 19:19:59.183772 sshd[1668]: pam_unix(sshd:session): session closed for user core Apr 13 19:19:59.189383 systemd[1]: sshd@5-178.105.12.165:22-50.85.169.122:54192.service: Deactivated successfully. Apr 13 19:19:59.191227 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:19:59.192954 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:19:59.194619 systemd-logind[1461]: Removed session 6. Apr 13 19:19:59.221606 systemd[1]: Started sshd@6-178.105.12.165:22-50.85.169.122:50218.service - OpenSSH per-connection server daemon (50.85.169.122:50218). Apr 13 19:19:59.349279 sshd[1714]: Accepted publickey for core from 50.85.169.122 port 50218 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:19:59.351842 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:19:59.360404 systemd-logind[1461]: New session 7 of user core. Apr 13 19:19:59.367316 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:19:59.453492 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:19:59.453810 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:19:59.784640 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:19:59.785646 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 19:20:00.055113 dockerd[1733]: time="2026-04-13T19:20:00.054435855Z" level=info msg="Starting up" Apr 13 19:20:00.174358 dockerd[1733]: time="2026-04-13T19:20:00.173982076Z" level=info msg="Loading containers: start." Apr 13 19:20:00.307325 kernel: Initializing XFRM netlink socket Apr 13 19:20:00.414292 systemd-networkd[1382]: docker0: Link UP Apr 13 19:20:00.444663 dockerd[1733]: time="2026-04-13T19:20:00.444318743Z" level=info msg="Loading containers: done." Apr 13 19:20:00.465458 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3101903572-merged.mount: Deactivated successfully. Apr 13 19:20:00.469768 dockerd[1733]: time="2026-04-13T19:20:00.469270480Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:20:00.469768 dockerd[1733]: time="2026-04-13T19:20:00.469400020Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:20:00.469768 dockerd[1733]: time="2026-04-13T19:20:00.469566266Z" level=info msg="Daemon has completed initialization" Apr 13 19:20:00.530202 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:20:00.530929 dockerd[1733]: time="2026-04-13T19:20:00.530794460Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:20:01.084108 containerd[1489]: time="2026-04-13T19:20:01.083697100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 13 19:20:01.917999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480874854.mount: Deactivated successfully. 
Apr 13 19:20:03.079335 containerd[1489]: time="2026-04-13T19:20:03.079079186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:03.081755 containerd[1489]: time="2026-04-13T19:20:03.081285476Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=24476988" Apr 13 19:20:03.083662 containerd[1489]: time="2026-04-13T19:20:03.083121489Z" level=info msg="ImageCreate event name:\"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:03.092095 containerd[1489]: time="2026-04-13T19:20:03.092038258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:03.094873 containerd[1489]: time="2026-04-13T19:20:03.094783940Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"24473489\" in 2.011024275s" Apr 13 19:20:03.095636 containerd[1489]: time="2026-04-13T19:20:03.095248927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\"" Apr 13 19:20:03.096418 containerd[1489]: time="2026-04-13T19:20:03.096385320Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 13 19:20:04.358486 containerd[1489]: time="2026-04-13T19:20:04.358388068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:04.360606 containerd[1489]: time="2026-04-13T19:20:04.360521546Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=19139662" Apr 13 19:20:04.360815 containerd[1489]: time="2026-04-13T19:20:04.360779053Z" level=info msg="ImageCreate event name:\"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:04.365050 containerd[1489]: time="2026-04-13T19:20:04.364843615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:04.367221 containerd[1489]: time="2026-04-13T19:20:04.366101340Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"20617664\" in 1.269665659s" Apr 13 19:20:04.367221 containerd[1489]: time="2026-04-13T19:20:04.366151898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\"" Apr 13 19:20:04.367221 containerd[1489]: time="2026-04-13T19:20:04.366579952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 13 19:20:05.580158 containerd[1489]: time="2026-04-13T19:20:05.580099329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:05.581845 containerd[1489]: time="2026-04-13T19:20:05.581790423Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=14195559" Apr 13 19:20:05.584051 containerd[1489]: time="2026-04-13T19:20:05.582681167Z" level=info msg="ImageCreate event name:\"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:05.586222 containerd[1489]: time="2026-04-13T19:20:05.586179564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:05.588019 containerd[1489]: time="2026-04-13T19:20:05.587950701Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"15673579\" in 1.221339701s" Apr 13 19:20:05.588019 containerd[1489]: time="2026-04-13T19:20:05.588014386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\"" Apr 13 19:20:05.588717 containerd[1489]: time="2026-04-13T19:20:05.588653270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 19:20:06.575288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597048707.mount: Deactivated successfully. 
Apr 13 19:20:06.823078 containerd[1489]: time="2026-04-13T19:20:06.821867079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:06.823622 containerd[1489]: time="2026-04-13T19:20:06.823577710Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=22697125" Apr 13 19:20:06.824450 containerd[1489]: time="2026-04-13T19:20:06.824406872Z" level=info msg="ImageCreate event name:\"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:06.828143 containerd[1489]: time="2026-04-13T19:20:06.827977029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:06.828973 containerd[1489]: time="2026-04-13T19:20:06.828917327Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"22696118\" in 1.240205181s" Apr 13 19:20:06.828973 containerd[1489]: time="2026-04-13T19:20:06.828962462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\"" Apr 13 19:20:06.829570 containerd[1489]: time="2026-04-13T19:20:06.829510077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 19:20:07.420911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790582329.mount: Deactivated successfully. 
Apr 13 19:20:08.387058 containerd[1489]: time="2026-04-13T19:20:08.385553152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.387842 containerd[1489]: time="2026-04-13T19:20:08.387801667Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Apr 13 19:20:08.388541 containerd[1489]: time="2026-04-13T19:20:08.388500826Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.392993 containerd[1489]: time="2026-04-13T19:20:08.392917913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.394757 containerd[1489]: time="2026-04-13T19:20:08.394696390Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.565144784s" Apr 13 19:20:08.394757 containerd[1489]: time="2026-04-13T19:20:08.394749186Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 13 19:20:08.395377 containerd[1489]: time="2026-04-13T19:20:08.395325060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 19:20:08.872874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392236863.mount: Deactivated successfully. 
Apr 13 19:20:08.884209 containerd[1489]: time="2026-04-13T19:20:08.883190256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.884209 containerd[1489]: time="2026-04-13T19:20:08.884169313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 13 19:20:08.885854 containerd[1489]: time="2026-04-13T19:20:08.885814243Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.889004 containerd[1489]: time="2026-04-13T19:20:08.888944907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:08.890570 containerd[1489]: time="2026-04-13T19:20:08.890514905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 495.149006ms" Apr 13 19:20:08.890570 containerd[1489]: time="2026-04-13T19:20:08.890562248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 13 19:20:08.891847 containerd[1489]: time="2026-04-13T19:20:08.891810498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 19:20:09.142193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 13 19:20:09.148441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:20:09.338040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:09.352676 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:09.411851 kubelet[2010]: E0413 19:20:09.411662 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:09.415826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:09.416215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:09.500173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619634280.mount: Deactivated successfully. Apr 13 19:20:10.353715 containerd[1489]: time="2026-04-13T19:20:10.353599669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:10.356008 containerd[1489]: time="2026-04-13T19:20:10.355896430Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139164" Apr 13 19:20:10.357377 containerd[1489]: time="2026-04-13T19:20:10.357322535Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:10.361409 containerd[1489]: time="2026-04-13T19:20:10.361343438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:10.363850 containerd[1489]: time="2026-04-13T19:20:10.363774567Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.471924142s" Apr 13 19:20:10.363850 containerd[1489]: time="2026-04-13T19:20:10.363828532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 13 19:20:10.875991 update_engine[1463]: I20260413 19:20:10.875254 1463 update_attempter.cc:509] Updating boot flags... Apr 13 19:20:10.940096 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2110) Apr 13 19:20:11.042048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2114) Apr 13 19:20:15.593419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:15.606645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:15.662790 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-7.scope)... Apr 13 19:20:15.662810 systemd[1]: Reloading... Apr 13 19:20:15.817050 zram_generator::config[2164]: No configuration found. Apr 13 19:20:15.936108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:20:16.010701 systemd[1]: Reloading finished in 347 ms. Apr 13 19:20:16.067855 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:20:16.071767 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:20:16.072555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:20:16.079559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:16.230312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:16.233832 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:20:16.277966 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:20:16.278427 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:20:16.278585 kubelet[2213]: I0413 19:20:16.278550 2213 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:20:16.834782 kubelet[2213]: I0413 19:20:16.834726 2213 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:20:16.835169 kubelet[2213]: I0413 19:20:16.835146 2213 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:20:16.835311 kubelet[2213]: I0413 19:20:16.835293 2213 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:20:16.837052 kubelet[2213]: I0413 19:20:16.835405 2213 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:20:16.837052 kubelet[2213]: I0413 19:20:16.836000 2213 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:20:16.845512 kubelet[2213]: E0413 19:20:16.845453 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.12.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:20:16.846252 kubelet[2213]: I0413 19:20:16.846216 2213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:20:16.853662 kubelet[2213]: E0413 19:20:16.853610 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:20:16.853953 kubelet[2213]: I0413 19:20:16.853884 2213 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:20:16.856415 kubelet[2213]: I0413 19:20:16.856387 2213 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:20:16.856660 kubelet[2213]: I0413 19:20:16.856627 2213 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:20:16.857190 kubelet[2213]: I0413 19:20:16.856662 2213 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-b-7ea64c4796","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:20:16.857190 kubelet[2213]: I0413 19:20:16.857179 2213 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:20:16.857190 kubelet[2213]: I0413 19:20:16.857191 2213 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:20:16.857343 kubelet[2213]: I0413 19:20:16.857326 2213 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:20:16.860757 kubelet[2213]: I0413 19:20:16.860706 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:20:16.863567 kubelet[2213]: I0413 19:20:16.863531 2213 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:20:16.863567 kubelet[2213]: I0413 19:20:16.863573 2213 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:20:16.863715 kubelet[2213]: I0413 19:20:16.863608 2213 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:20:16.863715 kubelet[2213]: I0413 19:20:16.863622 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:20:16.866326 kubelet[2213]: E0413 19:20:16.866279 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.12.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-b-7ea64c4796&limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:20:16.866621 kubelet[2213]: I0413 19:20:16.866596 2213 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:20:16.867706 kubelet[2213]: I0413 19:20:16.867669 2213 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:20:16.867868 kubelet[2213]: I0413 19:20:16.867850 2213 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:20:16.868239 kubelet[2213]: W0413 
19:20:16.868222 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 19:20:16.871255 kubelet[2213]: I0413 19:20:16.871230 2213 server.go:1262] "Started kubelet" Apr 13 19:20:16.871677 kubelet[2213]: E0413 19:20:16.871649 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.12.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:20:16.874240 kubelet[2213]: I0413 19:20:16.874177 2213 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:20:16.876060 kubelet[2213]: I0413 19:20:16.875677 2213 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:20:16.876060 kubelet[2213]: I0413 19:20:16.875767 2213 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:20:16.876060 kubelet[2213]: I0413 19:20:16.875803 2213 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:20:16.876395 kubelet[2213]: I0413 19:20:16.876375 2213 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:20:16.877958 kubelet[2213]: E0413 19:20:16.876643 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.12.165:6443/api/v1/namespaces/default/events\": dial tcp 178.105.12.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-b-7ea64c4796.18a600d4c454af7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-b-7ea64c4796,UID:ci-4081-3-7-b-7ea64c4796,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-b-7ea64c4796,},FirstTimestamp:2026-04-13 19:20:16.871190399 +0000 UTC m=+0.633187064,LastTimestamp:2026-04-13 19:20:16.871190399 +0000 UTC m=+0.633187064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-b-7ea64c4796,}" Apr 13 19:20:16.881331 kubelet[2213]: I0413 19:20:16.881137 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:20:16.882292 kubelet[2213]: I0413 19:20:16.881520 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:20:16.885156 kubelet[2213]: E0413 19:20:16.884735 2213 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:16.885314 kubelet[2213]: I0413 19:20:16.885170 2213 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:20:16.886068 kubelet[2213]: I0413 19:20:16.885413 2213 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:20:16.886068 kubelet[2213]: I0413 19:20:16.885485 2213 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:20:16.886068 kubelet[2213]: E0413 19:20:16.885958 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.12.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:20:16.886693 kubelet[2213]: E0413 19:20:16.886655 2213 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:20:16.887224 kubelet[2213]: E0413 19:20:16.887173 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.12.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-b-7ea64c4796?timeout=10s\": dial tcp 178.105.12.165:6443: connect: connection refused" interval="200ms" Apr 13 19:20:16.887958 kubelet[2213]: I0413 19:20:16.887401 2213 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:20:16.887958 kubelet[2213]: I0413 19:20:16.887499 2213 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:20:16.888890 kubelet[2213]: I0413 19:20:16.888861 2213 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:20:16.926998 kubelet[2213]: I0413 19:20:16.926939 2213 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:20:16.929971 kubelet[2213]: I0413 19:20:16.929104 2213 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:20:16.929971 kubelet[2213]: I0413 19:20:16.929139 2213 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:20:16.929971 kubelet[2213]: I0413 19:20:16.929169 2213 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:20:16.929971 kubelet[2213]: E0413 19:20:16.929275 2213 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:20:16.932008 kubelet[2213]: I0413 19:20:16.931881 2213 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:20:16.932008 kubelet[2213]: I0413 19:20:16.931902 2213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:20:16.932455 kubelet[2213]: E0413 19:20:16.932245 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.12.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:20:16.933057 kubelet[2213]: I0413 19:20:16.932706 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:20:16.935873 kubelet[2213]: I0413 19:20:16.935839 2213 policy_none.go:49] "None policy: Start" Apr 13 19:20:16.935873 kubelet[2213]: I0413 19:20:16.935868 2213 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:20:16.935873 kubelet[2213]: I0413 19:20:16.935882 2213 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:20:16.937849 kubelet[2213]: I0413 19:20:16.937809 2213 policy_none.go:47] "Start" Apr 13 19:20:16.945736 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:20:16.963777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 19:20:16.968416 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:20:16.981744 kubelet[2213]: E0413 19:20:16.981695 2213 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:20:16.982841 kubelet[2213]: I0413 19:20:16.982407 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:20:16.982841 kubelet[2213]: I0413 19:20:16.982447 2213 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:20:16.984755 kubelet[2213]: I0413 19:20:16.984684 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:20:16.986794 kubelet[2213]: E0413 19:20:16.986585 2213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:20:16.986794 kubelet[2213]: E0413 19:20:16.986702 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:17.047174 systemd[1]: Created slice kubepods-burstable-podf7f094c83c371850051f7865e63860e3.slice - libcontainer container kubepods-burstable-podf7f094c83c371850051f7865e63860e3.slice. Apr 13 19:20:17.069822 kubelet[2213]: E0413 19:20:17.068776 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.076593 systemd[1]: Created slice kubepods-burstable-pod2d45fe8b65e9e27364b556dd790423ac.slice - libcontainer container kubepods-burstable-pod2d45fe8b65e9e27364b556dd790423ac.slice. 
Apr 13 19:20:17.080728 kubelet[2213]: E0413 19:20:17.080468 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086062 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086714 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086773 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086809 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086837 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d45fe8b65e9e27364b556dd790423ac-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-b-7ea64c4796\" 
(UID: \"2d45fe8b65e9e27364b556dd790423ac\") " pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088078 kubelet[2213]: I0413 19:20:17.086864 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088435 kubelet[2213]: I0413 19:20:17.086896 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088435 kubelet[2213]: I0413 19:20:17.086921 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088435 kubelet[2213]: I0413 19:20:17.086951 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.088435 kubelet[2213]: I0413 19:20:17.086986 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.090302 kubelet[2213]: E0413 19:20:17.089235 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.12.165:6443/api/v1/nodes\": dial tcp 178.105.12.165:6443: connect: connection refused" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.090302 kubelet[2213]: E0413 19:20:17.089349 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.12.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-b-7ea64c4796?timeout=10s\": dial tcp 178.105.12.165:6443: connect: connection refused" interval="400ms" Apr 13 19:20:17.097096 systemd[1]: Created slice kubepods-burstable-pod36edd602cbd20ad738302bdb873875ec.slice - libcontainer container kubepods-burstable-pod36edd602cbd20ad738302bdb873875ec.slice. 
Apr 13 19:20:17.102130 kubelet[2213]: E0413 19:20:17.101881 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.293909 kubelet[2213]: I0413 19:20:17.292974 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.293909 kubelet[2213]: E0413 19:20:17.293872 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.12.165:6443/api/v1/nodes\": dial tcp 178.105.12.165:6443: connect: connection refused" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.376122 containerd[1489]: time="2026-04-13T19:20:17.375966364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-b-7ea64c4796,Uid:f7f094c83c371850051f7865e63860e3,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:17.384227 containerd[1489]: time="2026-04-13T19:20:17.384174350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-b-7ea64c4796,Uid:2d45fe8b65e9e27364b556dd790423ac,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:17.410533 containerd[1489]: time="2026-04-13T19:20:17.410219597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-b-7ea64c4796,Uid:36edd602cbd20ad738302bdb873875ec,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:17.491070 kubelet[2213]: E0413 19:20:17.490953 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.12.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-b-7ea64c4796?timeout=10s\": dial tcp 178.105.12.165:6443: connect: connection refused" interval="800ms" Apr 13 19:20:17.551765 kubelet[2213]: E0413 19:20:17.551588 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.12.165:6443/api/v1/namespaces/default/events\": dial tcp 
178.105.12.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-b-7ea64c4796.18a600d4c454af7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-b-7ea64c4796,UID:ci-4081-3-7-b-7ea64c4796,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-b-7ea64c4796,},FirstTimestamp:2026-04-13 19:20:16.871190399 +0000 UTC m=+0.633187064,LastTimestamp:2026-04-13 19:20:16.871190399 +0000 UTC m=+0.633187064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-b-7ea64c4796,}" Apr 13 19:20:17.697376 kubelet[2213]: I0413 19:20:17.697314 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.697994 kubelet[2213]: E0413 19:20:17.697851 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.12.165:6443/api/v1/nodes\": dial tcp 178.105.12.165:6443: connect: connection refused" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:17.793993 kubelet[2213]: E0413 19:20:17.793872 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.12.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:20:17.828781 kubelet[2213]: E0413 19:20:17.828688 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.12.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 
13 19:20:17.866547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402152104.mount: Deactivated successfully. Apr 13 19:20:17.877099 containerd[1489]: time="2026-04-13T19:20:17.876994837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:17.883528 containerd[1489]: time="2026-04-13T19:20:17.883462510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:17.886689 containerd[1489]: time="2026-04-13T19:20:17.885143528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:17.887308 containerd[1489]: time="2026-04-13T19:20:17.887266976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:17.889320 containerd[1489]: time="2026-04-13T19:20:17.889267821Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:17.894768 containerd[1489]: time="2026-04-13T19:20:17.894700709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:20:17.900308 containerd[1489]: time="2026-04-13T19:20:17.900259169Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:17.902891 containerd[1489]: time="2026-04-13T19:20:17.902838126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:17.905402 containerd[1489]: time="2026-04-13T19:20:17.905355541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.246835ms" Apr 13 19:20:17.908834 containerd[1489]: time="2026-04-13T19:20:17.908548596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.20017ms" Apr 13 19:20:17.928839 containerd[1489]: time="2026-04-13T19:20:17.928045010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.723992ms" Apr 13 19:20:18.070460 containerd[1489]: time="2026-04-13T19:20:18.069675945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:18.070460 containerd[1489]: time="2026-04-13T19:20:18.069745927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:18.070460 containerd[1489]: time="2026-04-13T19:20:18.069763813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.070460 containerd[1489]: time="2026-04-13T19:20:18.069858940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.071789 containerd[1489]: time="2026-04-13T19:20:18.071470116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:18.071789 containerd[1489]: time="2026-04-13T19:20:18.071531836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:18.071789 containerd[1489]: time="2026-04-13T19:20:18.071546915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.071789 containerd[1489]: time="2026-04-13T19:20:18.071724335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.086843 containerd[1489]: time="2026-04-13T19:20:18.086723052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:18.086843 containerd[1489]: time="2026-04-13T19:20:18.086793595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:18.086843 containerd[1489]: time="2026-04-13T19:20:18.086815211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.091050 containerd[1489]: time="2026-04-13T19:20:18.089460027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:18.111853 systemd[1]: Started cri-containerd-72c21b8efb400c0fba4c1dd7106509494e23fc6b03bea8062e067327b2c71772.scope - libcontainer container 72c21b8efb400c0fba4c1dd7106509494e23fc6b03bea8062e067327b2c71772. Apr 13 19:20:18.118346 systemd[1]: Started cri-containerd-80405b492dac2e5f0ce667509ac71944b0b9f2bdec023ecb0494b201667a7d2b.scope - libcontainer container 80405b492dac2e5f0ce667509ac71944b0b9f2bdec023ecb0494b201667a7d2b. Apr 13 19:20:18.132972 systemd[1]: Started cri-containerd-100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455.scope - libcontainer container 100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455. Apr 13 19:20:18.186695 kubelet[2213]: E0413 19:20:18.186652 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.12.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:20:18.191972 containerd[1489]: time="2026-04-13T19:20:18.191919207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-b-7ea64c4796,Uid:36edd602cbd20ad738302bdb873875ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"72c21b8efb400c0fba4c1dd7106509494e23fc6b03bea8062e067327b2c71772\"" Apr 13 19:20:18.201315 containerd[1489]: time="2026-04-13T19:20:18.201261422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-b-7ea64c4796,Uid:f7f094c83c371850051f7865e63860e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455\"" Apr 13 19:20:18.204061 containerd[1489]: time="2026-04-13T19:20:18.203580153Z" level=info msg="CreateContainer within sandbox \"72c21b8efb400c0fba4c1dd7106509494e23fc6b03bea8062e067327b2c71772\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:20:18.207266 containerd[1489]: time="2026-04-13T19:20:18.207185338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-b-7ea64c4796,Uid:2d45fe8b65e9e27364b556dd790423ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"80405b492dac2e5f0ce667509ac71944b0b9f2bdec023ecb0494b201667a7d2b\"" Apr 13 19:20:18.208289 containerd[1489]: time="2026-04-13T19:20:18.208058160Z" level=info msg="CreateContainer within sandbox \"100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:20:18.216223 containerd[1489]: time="2026-04-13T19:20:18.216077747Z" level=info msg="CreateContainer within sandbox \"80405b492dac2e5f0ce667509ac71944b0b9f2bdec023ecb0494b201667a7d2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:20:18.232190 kubelet[2213]: E0413 19:20:18.232113 2213 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.12.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-b-7ea64c4796&limit=500&resourceVersion=0\": dial tcp 178.105.12.165:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:20:18.241840 containerd[1489]: time="2026-04-13T19:20:18.241776360Z" level=info msg="CreateContainer within sandbox \"72c21b8efb400c0fba4c1dd7106509494e23fc6b03bea8062e067327b2c71772\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b85ffc0844c146f1eaafb8715aef337e50a6704bc949081eb18a9d85b58bbb80\"" Apr 13 19:20:18.242750 containerd[1489]: time="2026-04-13T19:20:18.242713789Z" level=info msg="StartContainer for \"b85ffc0844c146f1eaafb8715aef337e50a6704bc949081eb18a9d85b58bbb80\"" Apr 13 19:20:18.247247 containerd[1489]: time="2026-04-13T19:20:18.246751535Z" level=info msg="CreateContainer within sandbox 
\"100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd\"" Apr 13 19:20:18.248078 containerd[1489]: time="2026-04-13T19:20:18.247751327Z" level=info msg="StartContainer for \"a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd\"" Apr 13 19:20:18.253003 containerd[1489]: time="2026-04-13T19:20:18.252866466Z" level=info msg="CreateContainer within sandbox \"80405b492dac2e5f0ce667509ac71944b0b9f2bdec023ecb0494b201667a7d2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"104ed51357418a2063a660e95cd8a91b9a76d81a2f8e6ab3961a835cde84830f\"" Apr 13 19:20:18.255679 containerd[1489]: time="2026-04-13T19:20:18.254354924Z" level=info msg="StartContainer for \"104ed51357418a2063a660e95cd8a91b9a76d81a2f8e6ab3961a835cde84830f\"" Apr 13 19:20:18.277318 systemd[1]: Started cri-containerd-b85ffc0844c146f1eaafb8715aef337e50a6704bc949081eb18a9d85b58bbb80.scope - libcontainer container b85ffc0844c146f1eaafb8715aef337e50a6704bc949081eb18a9d85b58bbb80. Apr 13 19:20:18.292804 kubelet[2213]: E0413 19:20:18.292690 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.12.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-b-7ea64c4796?timeout=10s\": dial tcp 178.105.12.165:6443: connect: connection refused" interval="1.6s" Apr 13 19:20:18.310430 systemd[1]: Started cri-containerd-104ed51357418a2063a660e95cd8a91b9a76d81a2f8e6ab3961a835cde84830f.scope - libcontainer container 104ed51357418a2063a660e95cd8a91b9a76d81a2f8e6ab3961a835cde84830f. Apr 13 19:20:18.313687 systemd[1]: Started cri-containerd-a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd.scope - libcontainer container a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd. 
Apr 13 19:20:18.366867 containerd[1489]: time="2026-04-13T19:20:18.365956482Z" level=info msg="StartContainer for \"b85ffc0844c146f1eaafb8715aef337e50a6704bc949081eb18a9d85b58bbb80\" returns successfully" Apr 13 19:20:18.389169 containerd[1489]: time="2026-04-13T19:20:18.389109295Z" level=info msg="StartContainer for \"104ed51357418a2063a660e95cd8a91b9a76d81a2f8e6ab3961a835cde84830f\" returns successfully" Apr 13 19:20:18.398254 containerd[1489]: time="2026-04-13T19:20:18.398187025Z" level=info msg="StartContainer for \"a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd\" returns successfully" Apr 13 19:20:18.500732 kubelet[2213]: I0413 19:20:18.500695 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:18.502128 kubelet[2213]: E0413 19:20:18.502070 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.12.165:6443/api/v1/nodes\": dial tcp 178.105.12.165:6443: connect: connection refused" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:18.942037 kubelet[2213]: E0413 19:20:18.940793 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:18.948394 kubelet[2213]: E0413 19:20:18.948366 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:18.951701 kubelet[2213]: E0413 19:20:18.951674 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:19.954960 kubelet[2213]: E0413 19:20:19.954903 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" 
node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:19.955419 kubelet[2213]: E0413 19:20:19.955396 2213 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.107040 kubelet[2213]: I0413 19:20:20.105586 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.466760 kubelet[2213]: E0413 19:20:20.466720 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-b-7ea64c4796\" not found" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.577450 kubelet[2213]: I0413 19:20:20.577398 2213 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.577450 kubelet[2213]: E0413 19:20:20.577444 2213 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-b-7ea64c4796\": node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:20.610238 kubelet[2213]: E0413 19:20:20.610196 2213 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:20.710513 kubelet[2213]: E0413 19:20:20.710457 2213 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:20.811598 kubelet[2213]: E0413 19:20:20.810998 2213 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:20.866450 kubelet[2213]: I0413 19:20:20.866394 2213 apiserver.go:52] "Watching apiserver" Apr 13 19:20:20.886667 kubelet[2213]: I0413 19:20:20.886560 2213 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:20:20.887827 kubelet[2213]: I0413 19:20:20.887782 2213 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.896455 kubelet[2213]: E0413 19:20:20.896198 2213 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.896455 kubelet[2213]: I0413 19:20:20.896231 2213 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.899742 kubelet[2213]: E0413 19:20:20.899634 2213 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.899742 kubelet[2213]: I0413 19:20:20.899680 2213 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:20.902568 kubelet[2213]: E0413 19:20:20.902514 2213 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-b-7ea64c4796\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:21.683052 kubelet[2213]: I0413 19:20:21.682863 2213 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:22.875956 systemd[1]: Reloading requested from client PID 2504 ('systemctl') (unit session-7.scope)... Apr 13 19:20:22.875981 systemd[1]: Reloading... Apr 13 19:20:22.983051 zram_generator::config[2544]: No configuration found. Apr 13 19:20:23.098372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 13 19:20:23.184458 systemd[1]: Reloading finished in 308 ms. Apr 13 19:20:23.231161 kubelet[2213]: I0413 19:20:23.231097 2213 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:20:23.233350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:23.247039 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:20:23.247670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:23.247922 systemd[1]: kubelet.service: Consumed 1.077s CPU time, 123.0M memory peak, 0B memory swap peak. Apr 13 19:20:23.253352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:23.427417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:23.430627 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:20:23.492464 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:20:23.492464 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:20:23.492464 kubelet[2589]: I0413 19:20:23.491084 2589 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:20:23.504812 kubelet[2589]: I0413 19:20:23.504760 2589 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:20:23.504984 kubelet[2589]: I0413 19:20:23.504973 2589 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:20:23.505085 kubelet[2589]: I0413 19:20:23.505075 2589 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:20:23.505157 kubelet[2589]: I0413 19:20:23.505144 2589 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:20:23.505483 kubelet[2589]: I0413 19:20:23.505465 2589 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:20:23.507449 kubelet[2589]: I0413 19:20:23.507080 2589 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:20:23.513092 kubelet[2589]: I0413 19:20:23.512569 2589 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:20:23.515721 kubelet[2589]: E0413 19:20:23.515689 2589 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:20:23.516285 kubelet[2589]: I0413 19:20:23.516034 2589 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:20:23.519519 kubelet[2589]: I0413 19:20:23.519462 2589 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:20:23.520332 kubelet[2589]: I0413 19:20:23.519921 2589 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:20:23.520332 kubelet[2589]: I0413 19:20:23.519947 2589 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-b-7ea64c4796","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:20:23.520332 kubelet[2589]: I0413 19:20:23.520148 2589 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:20:23.520332 kubelet[2589]: I0413 19:20:23.520158 2589 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:20:23.520525 kubelet[2589]: I0413 19:20:23.520185 2589 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:20:23.520670 kubelet[2589]: I0413 19:20:23.520658 2589 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:20:23.520983 kubelet[2589]: I0413 19:20:23.520935 2589 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:20:23.520983 kubelet[2589]: I0413 19:20:23.520957 2589 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:20:23.523059 kubelet[2589]: I0413 19:20:23.521767 2589 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:20:23.523059 kubelet[2589]: I0413 19:20:23.521803 2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:20:23.527248 kubelet[2589]: I0413 19:20:23.527213 2589 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:20:23.530773 kubelet[2589]: I0413 19:20:23.530736 2589 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:20:23.530995 kubelet[2589]: I0413 19:20:23.530982 2589 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:20:23.539989 kubelet[2589]: I0413 19:20:23.539963 2589 server.go:1262] "Started kubelet" Apr 13 19:20:23.542219 kubelet[2589]: I0413 19:20:23.540283 2589 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:20:23.542219 kubelet[2589]: I0413 19:20:23.542202 2589 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:20:23.542475 kubelet[2589]: I0413 19:20:23.542445 2589 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:20:23.542531 kubelet[2589]: I0413 19:20:23.542445 2589 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:20:23.549714 kubelet[2589]: I0413 19:20:23.541948 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:20:23.569139 kubelet[2589]: I0413 19:20:23.569094 2589 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:20:23.569763 kubelet[2589]: I0413 19:20:23.541975 2589 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:20:23.570599 kubelet[2589]: E0413 19:20:23.570541 2589 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-b-7ea64c4796\" not found" Apr 13 19:20:23.571556 kubelet[2589]: I0413 19:20:23.571536 2589 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:20:23.574101 kubelet[2589]: I0413 19:20:23.573701 2589 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:20:23.574101 kubelet[2589]: I0413 19:20:23.573834 2589 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:20:23.575532 kubelet[2589]: I0413 19:20:23.575508 2589 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:20:23.576034 kubelet[2589]: I0413 19:20:23.575790 2589 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:20:23.577680 kubelet[2589]: E0413 19:20:23.577655 2589 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:20:23.578470 kubelet[2589]: I0413 19:20:23.578437 2589 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:20:23.586061 kubelet[2589]: I0413 19:20:23.585199 2589 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:20:23.588946 kubelet[2589]: I0413 19:20:23.588868 2589 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 19:20:23.588946 kubelet[2589]: I0413 19:20:23.588912 2589 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:20:23.588946 kubelet[2589]: I0413 19:20:23.588943 2589 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:20:23.589193 kubelet[2589]: E0413 19:20:23.589005 2589 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678673 2589 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678692 2589 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678716 2589 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678856 2589 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678866 2589 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678882 2589 policy_none.go:49] "None policy: Start" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678890 2589 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678898 2589 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Apr 13 19:20:23.679044 kubelet[2589]: I0413 19:20:23.678985 2589 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:20:23.680746 kubelet[2589]: I0413 19:20:23.678993 2589 policy_none.go:47] "Start" Apr 13 19:20:23.685923 kubelet[2589]: E0413 19:20:23.685893 2589 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:20:23.686512 kubelet[2589]: I0413 19:20:23.686488 2589 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:20:23.686906 kubelet[2589]: I0413 19:20:23.686831 2589 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:20:23.688865 kubelet[2589]: I0413 19:20:23.687888 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:20:23.689829 kubelet[2589]: I0413 19:20:23.689787 2589 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.691353 kubelet[2589]: I0413 19:20:23.690504 2589 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.691353 kubelet[2589]: I0413 19:20:23.690851 2589 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.694836 kubelet[2589]: E0413 19:20:23.694543 2589 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:20:23.714222 kubelet[2589]: E0413 19:20:23.714114 2589 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-b-7ea64c4796\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.775349 kubelet[2589]: I0413 19:20:23.775066 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777297 kubelet[2589]: I0413 19:20:23.777189 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777672 kubelet[2589]: I0413 19:20:23.777310 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777672 kubelet[2589]: I0413 19:20:23.777363 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36edd602cbd20ad738302bdb873875ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" (UID: \"36edd602cbd20ad738302bdb873875ec\") " 
pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777672 kubelet[2589]: I0413 19:20:23.777411 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777672 kubelet[2589]: I0413 19:20:23.777457 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d45fe8b65e9e27364b556dd790423ac-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-b-7ea64c4796\" (UID: \"2d45fe8b65e9e27364b556dd790423ac\") " pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777672 kubelet[2589]: I0413 19:20:23.777524 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777962 kubelet[2589]: I0413 19:20:23.777560 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.777962 kubelet[2589]: I0413 19:20:23.777609 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f7f094c83c371850051f7865e63860e3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-b-7ea64c4796\" (UID: \"f7f094c83c371850051f7865e63860e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.808601 kubelet[2589]: I0413 19:20:23.807226 2589 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.821602 kubelet[2589]: I0413 19:20:23.821512 2589 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:23.821749 kubelet[2589]: I0413 19:20:23.821632 2589 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:24.525125 kubelet[2589]: I0413 19:20:24.524973 2589 apiserver.go:52] "Watching apiserver" Apr 13 19:20:24.575071 kubelet[2589]: I0413 19:20:24.574521 2589 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:20:24.647074 kubelet[2589]: I0413 19:20:24.645269 2589 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:24.662048 kubelet[2589]: E0413 19:20:24.661362 2589 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-b-7ea64c4796\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" Apr 13 19:20:24.702112 kubelet[2589]: I0413 19:20:24.701717 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-b-7ea64c4796" podStartSLOduration=3.701698384 podStartE2EDuration="3.701698384s" podCreationTimestamp="2026-04-13 19:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:24.677893052 +0000 UTC m=+1.241604218" watchObservedRunningTime="2026-04-13 19:20:24.701698384 +0000 UTC m=+1.265409550" Apr 13 19:20:24.721045 
kubelet[2589]: I0413 19:20:24.720131 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-b-7ea64c4796" podStartSLOduration=1.720108151 podStartE2EDuration="1.720108151s" podCreationTimestamp="2026-04-13 19:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:24.703220878 +0000 UTC m=+1.266932044" watchObservedRunningTime="2026-04-13 19:20:24.720108151 +0000 UTC m=+1.283819317" Apr 13 19:20:24.740627 kubelet[2589]: I0413 19:20:24.740563 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-b-7ea64c4796" podStartSLOduration=1.740545319 podStartE2EDuration="1.740545319s" podCreationTimestamp="2026-04-13 19:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:24.721554155 +0000 UTC m=+1.285265321" watchObservedRunningTime="2026-04-13 19:20:24.740545319 +0000 UTC m=+1.304256526" Apr 13 19:20:28.468866 kubelet[2589]: I0413 19:20:28.468831 2589 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:20:28.470317 containerd[1489]: time="2026-04-13T19:20:28.470173627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:20:28.472408 kubelet[2589]: I0413 19:20:28.470858 2589 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:20:29.301170 systemd[1]: Created slice kubepods-besteffort-pod43bd72e6_6e71_4259_a824_19b5d4daca26.slice - libcontainer container kubepods-besteffort-pod43bd72e6_6e71_4259_a824_19b5d4daca26.slice. 
Apr 13 19:20:29.321821 kubelet[2589]: I0413 19:20:29.321704 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43bd72e6-6e71-4259-a824-19b5d4daca26-xtables-lock\") pod \"kube-proxy-27xrf\" (UID: \"43bd72e6-6e71-4259-a824-19b5d4daca26\") " pod="kube-system/kube-proxy-27xrf" Apr 13 19:20:29.321821 kubelet[2589]: I0413 19:20:29.321743 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43bd72e6-6e71-4259-a824-19b5d4daca26-lib-modules\") pod \"kube-proxy-27xrf\" (UID: \"43bd72e6-6e71-4259-a824-19b5d4daca26\") " pod="kube-system/kube-proxy-27xrf" Apr 13 19:20:29.321821 kubelet[2589]: I0413 19:20:29.321760 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs7s\" (UniqueName: \"kubernetes.io/projected/43bd72e6-6e71-4259-a824-19b5d4daca26-kube-api-access-lfs7s\") pod \"kube-proxy-27xrf\" (UID: \"43bd72e6-6e71-4259-a824-19b5d4daca26\") " pod="kube-system/kube-proxy-27xrf" Apr 13 19:20:29.321821 kubelet[2589]: I0413 19:20:29.321807 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43bd72e6-6e71-4259-a824-19b5d4daca26-kube-proxy\") pod \"kube-proxy-27xrf\" (UID: \"43bd72e6-6e71-4259-a824-19b5d4daca26\") " pod="kube-system/kube-proxy-27xrf" Apr 13 19:20:29.617779 containerd[1489]: time="2026-04-13T19:20:29.617280812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27xrf,Uid:43bd72e6-6e71-4259-a824-19b5d4daca26,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:29.653364 containerd[1489]: time="2026-04-13T19:20:29.652523716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:29.653364 containerd[1489]: time="2026-04-13T19:20:29.652594046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:29.653364 containerd[1489]: time="2026-04-13T19:20:29.652607736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:29.653364 containerd[1489]: time="2026-04-13T19:20:29.652701562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:29.695240 systemd[1]: Started cri-containerd-03018de665b33316ebb9364796cc7d03668d97b13322be17ca3d447fb204fe36.scope - libcontainer container 03018de665b33316ebb9364796cc7d03668d97b13322be17ca3d447fb204fe36. Apr 13 19:20:29.742589 systemd[1]: Created slice kubepods-besteffort-pod8fc6c9eb_1e78_46c8_bef4_774023eef2f7.slice - libcontainer container kubepods-besteffort-pod8fc6c9eb_1e78_46c8_bef4_774023eef2f7.slice. 
Apr 13 19:20:29.749753 containerd[1489]: time="2026-04-13T19:20:29.749705630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27xrf,Uid:43bd72e6-6e71-4259-a824-19b5d4daca26,Namespace:kube-system,Attempt:0,} returns sandbox id \"03018de665b33316ebb9364796cc7d03668d97b13322be17ca3d447fb204fe36\"" Apr 13 19:20:29.756329 containerd[1489]: time="2026-04-13T19:20:29.756267507Z" level=info msg="CreateContainer within sandbox \"03018de665b33316ebb9364796cc7d03668d97b13322be17ca3d447fb204fe36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:20:29.780330 containerd[1489]: time="2026-04-13T19:20:29.780277274Z" level=info msg="CreateContainer within sandbox \"03018de665b33316ebb9364796cc7d03668d97b13322be17ca3d447fb204fe36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efa1fbacb3c3547facede9377532a89ff5e24e9968cc71e4f5536c70fb74fc0b\"" Apr 13 19:20:29.782564 containerd[1489]: time="2026-04-13T19:20:29.781593924Z" level=info msg="StartContainer for \"efa1fbacb3c3547facede9377532a89ff5e24e9968cc71e4f5536c70fb74fc0b\"" Apr 13 19:20:29.812405 systemd[1]: Started cri-containerd-efa1fbacb3c3547facede9377532a89ff5e24e9968cc71e4f5536c70fb74fc0b.scope - libcontainer container efa1fbacb3c3547facede9377532a89ff5e24e9968cc71e4f5536c70fb74fc0b. 
Apr 13 19:20:29.826783 kubelet[2589]: I0413 19:20:29.826719 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fc6c9eb-1e78-46c8-bef4-774023eef2f7-var-lib-calico\") pod \"tigera-operator-5588576f44-wggt4\" (UID: \"8fc6c9eb-1e78-46c8-bef4-774023eef2f7\") " pod="tigera-operator/tigera-operator-5588576f44-wggt4" Apr 13 19:20:29.827256 kubelet[2589]: I0413 19:20:29.826810 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2f4g\" (UniqueName: \"kubernetes.io/projected/8fc6c9eb-1e78-46c8-bef4-774023eef2f7-kube-api-access-l2f4g\") pod \"tigera-operator-5588576f44-wggt4\" (UID: \"8fc6c9eb-1e78-46c8-bef4-774023eef2f7\") " pod="tigera-operator/tigera-operator-5588576f44-wggt4" Apr 13 19:20:29.843470 containerd[1489]: time="2026-04-13T19:20:29.843412368Z" level=info msg="StartContainer for \"efa1fbacb3c3547facede9377532a89ff5e24e9968cc71e4f5536c70fb74fc0b\" returns successfully" Apr 13 19:20:30.052847 containerd[1489]: time="2026-04-13T19:20:30.052417596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-wggt4,Uid:8fc6c9eb-1e78-46c8-bef4-774023eef2f7,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:20:30.083938 containerd[1489]: time="2026-04-13T19:20:30.083426448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:30.083938 containerd[1489]: time="2026-04-13T19:20:30.083666643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:30.083938 containerd[1489]: time="2026-04-13T19:20:30.083683539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:30.083938 containerd[1489]: time="2026-04-13T19:20:30.083777391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:30.104893 systemd[1]: Started cri-containerd-05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a.scope - libcontainer container 05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a. Apr 13 19:20:30.158615 containerd[1489]: time="2026-04-13T19:20:30.158468306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-wggt4,Uid:8fc6c9eb-1e78-46c8-bef4-774023eef2f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a\"" Apr 13 19:20:30.161152 containerd[1489]: time="2026-04-13T19:20:30.160865014Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:20:31.463184 kubelet[2589]: I0413 19:20:31.463095 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27xrf" podStartSLOduration=2.463076529 podStartE2EDuration="2.463076529s" podCreationTimestamp="2026-04-13 19:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:30.697986611 +0000 UTC m=+7.261697777" watchObservedRunningTime="2026-04-13 19:20:31.463076529 +0000 UTC m=+8.026787695" Apr 13 19:20:32.090436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568073886.mount: Deactivated successfully. 
Apr 13 19:20:32.910649 containerd[1489]: time="2026-04-13T19:20:32.910575374Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:32.912510 containerd[1489]: time="2026-04-13T19:20:32.912454576Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:20:32.913820 containerd[1489]: time="2026-04-13T19:20:32.913782057Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:32.917343 containerd[1489]: time="2026-04-13T19:20:32.916946062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:32.917930 containerd[1489]: time="2026-04-13T19:20:32.917885884Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.756976108s" Apr 13 19:20:32.917930 containerd[1489]: time="2026-04-13T19:20:32.917926239Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:20:32.925304 containerd[1489]: time="2026-04-13T19:20:32.925228062Z" level=info msg="CreateContainer within sandbox \"05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:20:32.941417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535237233.mount: Deactivated successfully. 
Apr 13 19:20:32.948276 containerd[1489]: time="2026-04-13T19:20:32.948219999Z" level=info msg="CreateContainer within sandbox \"05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053\"" Apr 13 19:20:32.951745 containerd[1489]: time="2026-04-13T19:20:32.949354070Z" level=info msg="StartContainer for \"4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053\"" Apr 13 19:20:32.983299 systemd[1]: Started cri-containerd-4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053.scope - libcontainer container 4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053. Apr 13 19:20:33.018937 containerd[1489]: time="2026-04-13T19:20:33.018551318Z" level=info msg="StartContainer for \"4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053\" returns successfully" Apr 13 19:20:33.730945 kubelet[2589]: I0413 19:20:33.730121 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-wggt4" podStartSLOduration=1.970694057 podStartE2EDuration="4.730104525s" podCreationTimestamp="2026-04-13 19:20:29 +0000 UTC" firstStartedPulling="2026-04-13 19:20:30.160213135 +0000 UTC m=+6.723924301" lastFinishedPulling="2026-04-13 19:20:32.919623643 +0000 UTC m=+9.483334769" observedRunningTime="2026-04-13 19:20:33.710037823 +0000 UTC m=+10.273748989" watchObservedRunningTime="2026-04-13 19:20:33.730104525 +0000 UTC m=+10.293815731" Apr 13 19:20:37.691282 sudo[1717]: pam_unix(sudo:session): session closed for user root Apr 13 19:20:37.708944 sshd[1714]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:37.717869 systemd[1]: sshd@6-178.105.12.165:22-50.85.169.122:50218.service: Deactivated successfully. Apr 13 19:20:37.722420 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 13 19:20:37.723509 systemd[1]: session-7.scope: Consumed 7.555s CPU time, 152.6M memory peak, 0B memory swap peak. Apr 13 19:20:37.724828 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:20:37.729929 systemd-logind[1461]: Removed session 7. Apr 13 19:20:44.602525 systemd[1]: Created slice kubepods-besteffort-pod6928d7dc_e9f9_47d6_8bc4_8b430b1449cf.slice - libcontainer container kubepods-besteffort-pod6928d7dc_e9f9_47d6_8bc4_8b430b1449cf.slice. Apr 13 19:20:44.627798 kubelet[2589]: I0413 19:20:44.627686 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6928d7dc-e9f9-47d6-8bc4-8b430b1449cf-typha-certs\") pod \"calico-typha-7978cf7dbc-78fw2\" (UID: \"6928d7dc-e9f9-47d6-8bc4-8b430b1449cf\") " pod="calico-system/calico-typha-7978cf7dbc-78fw2" Apr 13 19:20:44.627798 kubelet[2589]: I0413 19:20:44.627736 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6928d7dc-e9f9-47d6-8bc4-8b430b1449cf-tigera-ca-bundle\") pod \"calico-typha-7978cf7dbc-78fw2\" (UID: \"6928d7dc-e9f9-47d6-8bc4-8b430b1449cf\") " pod="calico-system/calico-typha-7978cf7dbc-78fw2" Apr 13 19:20:44.627798 kubelet[2589]: I0413 19:20:44.627755 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprpp\" (UniqueName: \"kubernetes.io/projected/6928d7dc-e9f9-47d6-8bc4-8b430b1449cf-kube-api-access-bprpp\") pod \"calico-typha-7978cf7dbc-78fw2\" (UID: \"6928d7dc-e9f9-47d6-8bc4-8b430b1449cf\") " pod="calico-system/calico-typha-7978cf7dbc-78fw2" Apr 13 19:20:44.805466 systemd[1]: Created slice kubepods-besteffort-podc80d8608_9891_426d_bc9b_9a10e8b79a09.slice - libcontainer container kubepods-besteffort-podc80d8608_9891_426d_bc9b_9a10e8b79a09.slice. 
Apr 13 19:20:44.830135 kubelet[2589]: I0413 19:20:44.830084 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-cni-net-dir\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830582 kubelet[2589]: I0413 19:20:44.830355 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-policysync\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830582 kubelet[2589]: I0413 19:20:44.830385 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-flexvol-driver-host\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830582 kubelet[2589]: I0413 19:20:44.830408 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-var-run-calico\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830582 kubelet[2589]: I0413 19:20:44.830425 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-xtables-lock\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830582 kubelet[2589]: I0413 19:20:44.830442 2589 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-nodeproc\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830794 kubelet[2589]: I0413 19:20:44.830459 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c80d8608-9891-426d-bc9b-9a10e8b79a09-tigera-ca-bundle\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830794 kubelet[2589]: I0413 19:20:44.830479 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwpc\" (UniqueName: \"kubernetes.io/projected/c80d8608-9891-426d-bc9b-9a10e8b79a09-kube-api-access-mgwpc\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830794 kubelet[2589]: I0413 19:20:44.830496 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-cni-log-dir\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830794 kubelet[2589]: I0413 19:20:44.830614 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-lib-modules\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830794 kubelet[2589]: I0413 19:20:44.830689 2589 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-sys-fs\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830911 kubelet[2589]: I0413 19:20:44.830762 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-bpffs\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830911 kubelet[2589]: I0413 19:20:44.830790 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c80d8608-9891-426d-bc9b-9a10e8b79a09-node-certs\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830911 kubelet[2589]: I0413 19:20:44.830821 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-cni-bin-dir\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.830911 kubelet[2589]: I0413 19:20:44.830851 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c80d8608-9891-426d-bc9b-9a10e8b79a09-var-lib-calico\") pod \"calico-node-dgb58\" (UID: \"c80d8608-9891-426d-bc9b-9a10e8b79a09\") " pod="calico-system/calico-node-dgb58" Apr 13 19:20:44.896278 kubelet[2589]: E0413 19:20:44.896102 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:44.914340 containerd[1489]: time="2026-04-13T19:20:44.914260668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7978cf7dbc-78fw2,Uid:6928d7dc-e9f9-47d6-8bc4-8b430b1449cf,Namespace:calico-system,Attempt:0,}" Apr 13 19:20:44.934053 kubelet[2589]: I0413 19:20:44.932301 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e1c37d2-8528-4b9f-9fc1-29276f518f72-registration-dir\") pod \"csi-node-driver-t82fb\" (UID: \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\") " pod="calico-system/csi-node-driver-t82fb" Apr 13 19:20:44.934053 kubelet[2589]: I0413 19:20:44.932352 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e1c37d2-8528-4b9f-9fc1-29276f518f72-socket-dir\") pod \"csi-node-driver-t82fb\" (UID: \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\") " pod="calico-system/csi-node-driver-t82fb" Apr 13 19:20:44.934053 kubelet[2589]: I0413 19:20:44.932394 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbq8c\" (UniqueName: \"kubernetes.io/projected/4e1c37d2-8528-4b9f-9fc1-29276f518f72-kube-api-access-zbq8c\") pod \"csi-node-driver-t82fb\" (UID: \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\") " pod="calico-system/csi-node-driver-t82fb" Apr 13 19:20:44.934053 kubelet[2589]: I0413 19:20:44.932460 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e1c37d2-8528-4b9f-9fc1-29276f518f72-varrun\") pod \"csi-node-driver-t82fb\" (UID: \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\") " 
pod="calico-system/csi-node-driver-t82fb" Apr 13 19:20:44.934053 kubelet[2589]: I0413 19:20:44.932504 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e1c37d2-8528-4b9f-9fc1-29276f518f72-kubelet-dir\") pod \"csi-node-driver-t82fb\" (UID: \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\") " pod="calico-system/csi-node-driver-t82fb" Apr 13 19:20:44.941062 kubelet[2589]: E0413 19:20:44.940486 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.941332 kubelet[2589]: W0413 19:20:44.941303 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.941408 kubelet[2589]: E0413 19:20:44.941396 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.941800 kubelet[2589]: E0413 19:20:44.941784 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.941967 kubelet[2589]: W0413 19:20:44.941894 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.942128 kubelet[2589]: E0413 19:20:44.942071 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.944169 kubelet[2589]: E0413 19:20:44.943305 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.945678 kubelet[2589]: W0413 19:20:44.945082 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.945678 kubelet[2589]: E0413 19:20:44.945143 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.946719 kubelet[2589]: E0413 19:20:44.946193 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.946719 kubelet[2589]: W0413 19:20:44.946220 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.946719 kubelet[2589]: E0413 19:20:44.946241 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.953537 kubelet[2589]: E0413 19:20:44.953182 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.953537 kubelet[2589]: W0413 19:20:44.953527 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.953690 kubelet[2589]: E0413 19:20:44.953557 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.963837 kubelet[2589]: E0413 19:20:44.963747 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.963837 kubelet[2589]: W0413 19:20:44.963775 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.963837 kubelet[2589]: E0413 19:20:44.963799 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.965866 kubelet[2589]: E0413 19:20:44.964563 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.965866 kubelet[2589]: W0413 19:20:44.964594 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.965866 kubelet[2589]: E0413 19:20:44.964615 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.968340 kubelet[2589]: E0413 19:20:44.968297 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.968340 kubelet[2589]: W0413 19:20:44.968329 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.968532 kubelet[2589]: E0413 19:20:44.968354 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.969317 kubelet[2589]: E0413 19:20:44.969094 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.969317 kubelet[2589]: W0413 19:20:44.969311 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.969525 kubelet[2589]: E0413 19:20:44.969337 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.974285 kubelet[2589]: E0413 19:20:44.974242 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.974285 kubelet[2589]: W0413 19:20:44.974274 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.974911 kubelet[2589]: E0413 19:20:44.974302 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.977805 kubelet[2589]: E0413 19:20:44.977757 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.977805 kubelet[2589]: W0413 19:20:44.977791 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.977805 kubelet[2589]: E0413 19:20:44.977820 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.978694 kubelet[2589]: E0413 19:20:44.978413 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.978694 kubelet[2589]: W0413 19:20:44.978632 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.978694 kubelet[2589]: E0413 19:20:44.978655 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.979190 kubelet[2589]: E0413 19:20:44.979162 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.979190 kubelet[2589]: W0413 19:20:44.979188 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.979469 kubelet[2589]: E0413 19:20:44.979362 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.980347 kubelet[2589]: E0413 19:20:44.980304 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.980774 kubelet[2589]: W0413 19:20:44.980731 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.980967 kubelet[2589]: E0413 19:20:44.980779 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.983085 kubelet[2589]: E0413 19:20:44.982950 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.983085 kubelet[2589]: W0413 19:20:44.982978 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.983085 kubelet[2589]: E0413 19:20:44.983000 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.984269 kubelet[2589]: E0413 19:20:44.984185 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.984269 kubelet[2589]: W0413 19:20:44.984216 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.984269 kubelet[2589]: E0413 19:20:44.984238 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.985607 kubelet[2589]: E0413 19:20:44.985571 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.985607 kubelet[2589]: W0413 19:20:44.985600 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.985709 kubelet[2589]: E0413 19:20:44.985622 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.987330 kubelet[2589]: E0413 19:20:44.987284 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.987454 kubelet[2589]: W0413 19:20:44.987314 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.987454 kubelet[2589]: E0413 19:20:44.987435 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.988436 kubelet[2589]: E0413 19:20:44.988403 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.988436 kubelet[2589]: W0413 19:20:44.988426 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.988717 kubelet[2589]: E0413 19:20:44.988482 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.991048 kubelet[2589]: E0413 19:20:44.989521 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.991048 kubelet[2589]: W0413 19:20:44.989551 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.991048 kubelet[2589]: E0413 19:20:44.989570 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.991048 kubelet[2589]: E0413 19:20:44.990538 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.991376 kubelet[2589]: W0413 19:20:44.991118 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.991376 kubelet[2589]: E0413 19:20:44.991209 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.991910 kubelet[2589]: E0413 19:20:44.991885 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.991910 kubelet[2589]: W0413 19:20:44.991902 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.992031 kubelet[2589]: E0413 19:20:44.991916 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.993394 kubelet[2589]: E0413 19:20:44.993366 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.993394 kubelet[2589]: W0413 19:20:44.993387 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.993515 kubelet[2589]: E0413 19:20:44.993405 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.993555 containerd[1489]: time="2026-04-13T19:20:44.992355290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:44.994344 kubelet[2589]: E0413 19:20:44.994308 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.994344 kubelet[2589]: W0413 19:20:44.994334 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.994344 kubelet[2589]: E0413 19:20:44.994350 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.995925 containerd[1489]: time="2026-04-13T19:20:44.993837495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:44.995925 containerd[1489]: time="2026-04-13T19:20:44.993866708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:44.995925 containerd[1489]: time="2026-04-13T19:20:44.995716803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:44.996495 kubelet[2589]: E0413 19:20:44.996467 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.996495 kubelet[2589]: W0413 19:20:44.996491 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.996600 kubelet[2589]: E0413 19:20:44.996512 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:44.997946 kubelet[2589]: E0413 19:20:44.997895 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.997946 kubelet[2589]: W0413 19:20:44.997922 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.997946 kubelet[2589]: E0413 19:20:44.997944 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:44.998631 kubelet[2589]: E0413 19:20:44.998603 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:44.998631 kubelet[2589]: W0413 19:20:44.998625 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:44.998741 kubelet[2589]: E0413 19:20:44.998644 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:45.006074 kubelet[2589]: E0413 19:20:45.002545 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.006074 kubelet[2589]: W0413 19:20:45.002586 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.006074 kubelet[2589]: E0413 19:20:45.002611 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:45.007533 kubelet[2589]: E0413 19:20:45.007349 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.007533 kubelet[2589]: W0413 19:20:45.007375 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.007533 kubelet[2589]: E0413 19:20:45.007399 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:45.010343 kubelet[2589]: E0413 19:20:45.009803 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.010343 kubelet[2589]: W0413 19:20:45.009841 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.010343 kubelet[2589]: E0413 19:20:45.009865 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:45.010343 kubelet[2589]: E0413 19:20:45.010292 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.010343 kubelet[2589]: W0413 19:20:45.010312 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.010343 kubelet[2589]: E0413 19:20:45.010328 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:45.013144 kubelet[2589]: E0413 19:20:45.013086 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.013144 kubelet[2589]: W0413 19:20:45.013119 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.014058 kubelet[2589]: E0413 19:20:45.013389 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:45.034430 kubelet[2589]: E0413 19:20:45.034308 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.034430 kubelet[2589]: W0413 19:20:45.034335 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.034430 kubelet[2589]: E0413 19:20:45.034365 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:45.036931 systemd[1]: Started cri-containerd-53689234b51b385fab436f4f3c9a0eb85e65ad4e7ed128552f97e26ba400ed4e.scope - libcontainer container 53689234b51b385fab436f4f3c9a0eb85e65ad4e7ed128552f97e26ba400ed4e. Apr 13 19:20:45.037977 kubelet[2589]: E0413 19:20:45.037242 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.037977 kubelet[2589]: W0413 19:20:45.037268 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.037977 kubelet[2589]: E0413 19:20:45.037288 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:45.070056 kubelet[2589]: E0413 19:20:45.069908 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:45.070056 kubelet[2589]: W0413 19:20:45.069941 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:45.070056 kubelet[2589]: E0413 19:20:45.069967 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:45.092325 containerd[1489]: time="2026-04-13T19:20:45.092271698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7978cf7dbc-78fw2,Uid:6928d7dc-e9f9-47d6-8bc4-8b430b1449cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"53689234b51b385fab436f4f3c9a0eb85e65ad4e7ed128552f97e26ba400ed4e\"" Apr 13 19:20:45.096767 containerd[1489]: time="2026-04-13T19:20:45.096489673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:20:45.114316 containerd[1489]: time="2026-04-13T19:20:45.113995731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgb58,Uid:c80d8608-9891-426d-bc9b-9a10e8b79a09,Namespace:calico-system,Attempt:0,}" Apr 13 19:20:45.145060 containerd[1489]: time="2026-04-13T19:20:45.144918648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:45.145060 containerd[1489]: time="2026-04-13T19:20:45.144982476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:45.145060 containerd[1489]: time="2026-04-13T19:20:45.144994842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:45.145485 containerd[1489]: time="2026-04-13T19:20:45.145114414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:45.168305 systemd[1]: Started cri-containerd-3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515.scope - libcontainer container 3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515. Apr 13 19:20:45.206129 containerd[1489]: time="2026-04-13T19:20:45.206055492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgb58,Uid:c80d8608-9891-426d-bc9b-9a10e8b79a09,Namespace:calico-system,Attempt:0,} returns sandbox id \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\"" Apr 13 19:20:46.591078 kubelet[2589]: E0413 19:20:46.590452 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:46.911650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249649224.mount: Deactivated successfully. 
Apr 13 19:20:47.958515 containerd[1489]: time="2026-04-13T19:20:47.958428813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:47.960251 containerd[1489]: time="2026-04-13T19:20:47.960168868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:20:47.964994 containerd[1489]: time="2026-04-13T19:20:47.961900160Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:47.968051 containerd[1489]: time="2026-04-13T19:20:47.967972547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:47.969453 containerd[1489]: time="2026-04-13T19:20:47.969401758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.872857302s" Apr 13 19:20:47.969647 containerd[1489]: time="2026-04-13T19:20:47.969623687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:20:47.971274 containerd[1489]: time="2026-04-13T19:20:47.971227688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:20:47.990811 containerd[1489]: time="2026-04-13T19:20:47.990761175Z" level=info msg="CreateContainer within sandbox \"53689234b51b385fab436f4f3c9a0eb85e65ad4e7ed128552f97e26ba400ed4e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:20:48.020228 containerd[1489]: time="2026-04-13T19:20:48.020167180Z" level=info msg="CreateContainer within sandbox \"53689234b51b385fab436f4f3c9a0eb85e65ad4e7ed128552f97e26ba400ed4e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3ae119ea7d59895fc898c7eeba8a5567cdeb813a90b6214d43111686a7754165\"" Apr 13 19:20:48.022831 containerd[1489]: time="2026-04-13T19:20:48.022777977Z" level=info msg="StartContainer for \"3ae119ea7d59895fc898c7eeba8a5567cdeb813a90b6214d43111686a7754165\"" Apr 13 19:20:48.058582 systemd[1]: Started cri-containerd-3ae119ea7d59895fc898c7eeba8a5567cdeb813a90b6214d43111686a7754165.scope - libcontainer container 3ae119ea7d59895fc898c7eeba8a5567cdeb813a90b6214d43111686a7754165. Apr 13 19:20:48.101632 containerd[1489]: time="2026-04-13T19:20:48.101457434Z" level=info msg="StartContainer for \"3ae119ea7d59895fc898c7eeba8a5567cdeb813a90b6214d43111686a7754165\" returns successfully" Apr 13 19:20:48.589546 kubelet[2589]: E0413 19:20:48.589474 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:48.748663 kubelet[2589]: E0413 19:20:48.748545 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.750321 kubelet[2589]: W0413 19:20:48.748572 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.751881 kubelet[2589]: E0413 19:20:48.751008 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" Apr 13 19:20:48.774246 kubelet[2589]: E0413 19:20:48.774228 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.774246 kubelet[2589]: W0413 19:20:48.774245 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.774327 kubelet[2589]: E0413 19:20:48.774257 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.774558 kubelet[2589]: E0413 19:20:48.774540 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.774558 kubelet[2589]: W0413 19:20:48.774554 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.774647 kubelet[2589]: E0413 19:20:48.774565 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.774957 kubelet[2589]: E0413 19:20:48.774940 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.774957 kubelet[2589]: W0413 19:20:48.774956 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.775086 kubelet[2589]: E0413 19:20:48.774969 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.775304 kubelet[2589]: E0413 19:20:48.775289 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.775304 kubelet[2589]: W0413 19:20:48.775303 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.775370 kubelet[2589]: E0413 19:20:48.775314 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.775600 kubelet[2589]: E0413 19:20:48.775585 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.775600 kubelet[2589]: W0413 19:20:48.775597 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.775694 kubelet[2589]: E0413 19:20:48.775606 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.776202 kubelet[2589]: E0413 19:20:48.776181 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.776202 kubelet[2589]: W0413 19:20:48.776198 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.776304 kubelet[2589]: E0413 19:20:48.776215 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.776516 kubelet[2589]: E0413 19:20:48.776502 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.776516 kubelet[2589]: W0413 19:20:48.776516 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.776579 kubelet[2589]: E0413 19:20:48.776527 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.776851 kubelet[2589]: E0413 19:20:48.776795 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.776851 kubelet[2589]: W0413 19:20:48.776818 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.776851 kubelet[2589]: E0413 19:20:48.776834 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.777281 kubelet[2589]: E0413 19:20:48.777262 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.777281 kubelet[2589]: W0413 19:20:48.777280 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.777366 kubelet[2589]: E0413 19:20:48.777292 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.777534 kubelet[2589]: E0413 19:20:48.777520 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.777534 kubelet[2589]: W0413 19:20:48.777533 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.777604 kubelet[2589]: E0413 19:20:48.777542 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.777845 kubelet[2589]: E0413 19:20:48.777828 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.777845 kubelet[2589]: W0413 19:20:48.777845 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.777935 kubelet[2589]: E0413 19:20:48.777855 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.778275 kubelet[2589]: E0413 19:20:48.778252 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.778275 kubelet[2589]: W0413 19:20:48.778275 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.778415 kubelet[2589]: E0413 19:20:48.778294 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:48.779130 kubelet[2589]: E0413 19:20:48.779108 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.779130 kubelet[2589]: W0413 19:20:48.779127 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.779249 kubelet[2589]: E0413 19:20:48.779139 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:48.779418 kubelet[2589]: E0413 19:20:48.779402 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:48.779418 kubelet[2589]: W0413 19:20:48.779416 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:48.779489 kubelet[2589]: E0413 19:20:48.779425 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.736377 kubelet[2589]: I0413 19:20:49.736195 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:20:49.765213 kubelet[2589]: E0413 19:20:49.764918 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.765213 kubelet[2589]: W0413 19:20:49.764943 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.765213 kubelet[2589]: E0413 19:20:49.764964 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.765556 kubelet[2589]: E0413 19:20:49.765382 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.765556 kubelet[2589]: W0413 19:20:49.765394 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.765556 kubelet[2589]: E0413 19:20:49.765406 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.766345 kubelet[2589]: E0413 19:20:49.766007 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.766345 kubelet[2589]: W0413 19:20:49.766077 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.766345 kubelet[2589]: E0413 19:20:49.766103 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.766787 kubelet[2589]: E0413 19:20:49.766539 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.766787 kubelet[2589]: W0413 19:20:49.766551 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.766787 kubelet[2589]: E0413 19:20:49.766563 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.767150 kubelet[2589]: E0413 19:20:49.766963 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.767150 kubelet[2589]: W0413 19:20:49.766998 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.767150 kubelet[2589]: E0413 19:20:49.767028 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.767430 kubelet[2589]: E0413 19:20:49.767373 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.767430 kubelet[2589]: W0413 19:20:49.767385 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.767430 kubelet[2589]: E0413 19:20:49.767395 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.767806 kubelet[2589]: E0413 19:20:49.767693 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.767806 kubelet[2589]: W0413 19:20:49.767707 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.767806 kubelet[2589]: E0413 19:20:49.767717 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.768298 kubelet[2589]: E0413 19:20:49.768170 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.768298 kubelet[2589]: W0413 19:20:49.768182 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.768298 kubelet[2589]: E0413 19:20:49.768193 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.768531 kubelet[2589]: E0413 19:20:49.768480 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.768531 kubelet[2589]: W0413 19:20:49.768490 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.768531 kubelet[2589]: E0413 19:20:49.768500 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.768935 kubelet[2589]: E0413 19:20:49.768774 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.768935 kubelet[2589]: W0413 19:20:49.768784 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.768935 kubelet[2589]: E0413 19:20:49.768797 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.769240 kubelet[2589]: E0413 19:20:49.769173 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.769240 kubelet[2589]: W0413 19:20:49.769185 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.769240 kubelet[2589]: E0413 19:20:49.769196 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.769752 kubelet[2589]: E0413 19:20:49.769634 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.769752 kubelet[2589]: W0413 19:20:49.769647 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.769752 kubelet[2589]: E0413 19:20:49.769657 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.769936 kubelet[2589]: E0413 19:20:49.769926 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.770091 kubelet[2589]: W0413 19:20:49.769984 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.770091 kubelet[2589]: E0413 19:20:49.769997 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.770490 kubelet[2589]: E0413 19:20:49.770327 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.770490 kubelet[2589]: W0413 19:20:49.770426 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.770490 kubelet[2589]: E0413 19:20:49.770441 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.770834 kubelet[2589]: E0413 19:20:49.770745 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.770834 kubelet[2589]: W0413 19:20:49.770757 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.770834 kubelet[2589]: E0413 19:20:49.770767 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.771251 containerd[1489]: time="2026-04-13T19:20:49.771206066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:49.772412 containerd[1489]: time="2026-04-13T19:20:49.772255168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 13 19:20:49.774835 containerd[1489]: time="2026-04-13T19:20:49.773506304Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:49.777580 containerd[1489]: time="2026-04-13T19:20:49.776511279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:49.777580 containerd[1489]: time="2026-04-13T19:20:49.777415129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.805892403s" Apr 13 19:20:49.777580 containerd[1489]: time="2026-04-13T19:20:49.777455263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:20:49.783378 kubelet[2589]: E0413 19:20:49.783343 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.783378 kubelet[2589]: W0413 19:20:49.783374 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.783544 kubelet[2589]: E0413 19:20:49.783401 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.784808 kubelet[2589]: E0413 19:20:49.784776 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.785438 kubelet[2589]: W0413 19:20:49.784801 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.785438 kubelet[2589]: E0413 19:20:49.784862 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.786311 kubelet[2589]: E0413 19:20:49.786173 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.786311 kubelet[2589]: W0413 19:20:49.786207 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.786311 kubelet[2589]: E0413 19:20:49.786257 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.787617 kubelet[2589]: E0413 19:20:49.787482 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.787617 kubelet[2589]: W0413 19:20:49.787513 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.787617 kubelet[2589]: E0413 19:20:49.787539 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.790098 kubelet[2589]: E0413 19:20:49.789392 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.790098 kubelet[2589]: W0413 19:20:49.789415 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.790098 kubelet[2589]: E0413 19:20:49.789434 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.790098 kubelet[2589]: E0413 19:20:49.789742 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.790098 kubelet[2589]: W0413 19:20:49.789782 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.790098 kubelet[2589]: E0413 19:20:49.789795 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.790367 containerd[1489]: time="2026-04-13T19:20:49.790118879Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:20:49.790847 kubelet[2589]: E0413 19:20:49.790727 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.790847 kubelet[2589]: W0413 19:20:49.790743 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.790847 kubelet[2589]: E0413 19:20:49.790757 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.791425 kubelet[2589]: E0413 19:20:49.791284 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.791425 kubelet[2589]: W0413 19:20:49.791301 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.791425 kubelet[2589]: E0413 19:20:49.791320 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.792342 kubelet[2589]: E0413 19:20:49.792211 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.792342 kubelet[2589]: W0413 19:20:49.792231 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.792342 kubelet[2589]: E0413 19:20:49.792245 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.792650 kubelet[2589]: E0413 19:20:49.792438 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.792650 kubelet[2589]: W0413 19:20:49.792446 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.792650 kubelet[2589]: E0413 19:20:49.792456 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.793220 kubelet[2589]: E0413 19:20:49.792932 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.793220 kubelet[2589]: W0413 19:20:49.792948 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.793220 kubelet[2589]: E0413 19:20:49.792960 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.793724 kubelet[2589]: E0413 19:20:49.793543 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.793724 kubelet[2589]: W0413 19:20:49.793559 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.793724 kubelet[2589]: E0413 19:20:49.793570 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.795353 kubelet[2589]: E0413 19:20:49.794850 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.795353 kubelet[2589]: W0413 19:20:49.794868 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.795353 kubelet[2589]: E0413 19:20:49.794881 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.795353 kubelet[2589]: E0413 19:20:49.795227 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.795353 kubelet[2589]: W0413 19:20:49.795237 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.795353 kubelet[2589]: E0413 19:20:49.795248 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.796183 kubelet[2589]: E0413 19:20:49.796159 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.796183 kubelet[2589]: W0413 19:20:49.796177 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.796275 kubelet[2589]: E0413 19:20:49.796193 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.798269 kubelet[2589]: E0413 19:20:49.798142 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.798269 kubelet[2589]: W0413 19:20:49.798266 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.798397 kubelet[2589]: E0413 19:20:49.798292 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.799164 kubelet[2589]: E0413 19:20:49.799003 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.799164 kubelet[2589]: W0413 19:20:49.799164 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.799268 kubelet[2589]: E0413 19:20:49.799184 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:20:49.801839 kubelet[2589]: E0413 19:20:49.801680 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:20:49.804407 kubelet[2589]: W0413 19:20:49.804366 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:20:49.804786 kubelet[2589]: E0413 19:20:49.804522 2589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:20:49.818063 containerd[1489]: time="2026-04-13T19:20:49.817960907Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46\"" Apr 13 19:20:49.820168 containerd[1489]: time="2026-04-13T19:20:49.819952673Z" level=info msg="StartContainer for \"0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46\"" Apr 13 19:20:49.858303 systemd[1]: Started cri-containerd-0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46.scope - libcontainer container 0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46. Apr 13 19:20:49.890539 containerd[1489]: time="2026-04-13T19:20:49.890398349Z" level=info msg="StartContainer for \"0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46\" returns successfully" Apr 13 19:20:49.905962 systemd[1]: cri-containerd-0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46.scope: Deactivated successfully. Apr 13 19:20:49.977386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46-rootfs.mount: Deactivated successfully. 
Apr 13 19:20:49.996332 containerd[1489]: time="2026-04-13T19:20:49.995969908Z" level=info msg="shim disconnected" id=0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46 namespace=k8s.io Apr 13 19:20:49.996332 containerd[1489]: time="2026-04-13T19:20:49.996059181Z" level=warning msg="cleaning up after shim disconnected" id=0d0cca183043fae07fd0c1f3c25c03067bf2d965095fabcccbac8ee4eb98ef46 namespace=k8s.io Apr 13 19:20:49.996332 containerd[1489]: time="2026-04-13T19:20:49.996069705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:20:50.590481 kubelet[2589]: E0413 19:20:50.590382 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:50.748898 containerd[1489]: time="2026-04-13T19:20:50.748503019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:20:50.785123 kubelet[2589]: I0413 19:20:50.784773 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7978cf7dbc-78fw2" podStartSLOduration=3.908661592 podStartE2EDuration="6.784756174s" podCreationTimestamp="2026-04-13 19:20:44 +0000 UTC" firstStartedPulling="2026-04-13 19:20:45.094875083 +0000 UTC m=+21.658586249" lastFinishedPulling="2026-04-13 19:20:47.970969665 +0000 UTC m=+24.534680831" observedRunningTime="2026-04-13 19:20:48.751707356 +0000 UTC m=+25.315418522" watchObservedRunningTime="2026-04-13 19:20:50.784756174 +0000 UTC m=+27.348467340" Apr 13 19:20:52.589990 kubelet[2589]: E0413 19:20:52.589411 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:54.595403 kubelet[2589]: E0413 19:20:54.589667 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:56.589311 kubelet[2589]: E0413 19:20:56.589239 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:20:57.355923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200382099.mount: Deactivated successfully. Apr 13 19:20:57.382080 containerd[1489]: time="2026-04-13T19:20:57.381996834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:57.383290 containerd[1489]: time="2026-04-13T19:20:57.383176742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:20:57.384533 containerd[1489]: time="2026-04-13T19:20:57.384206132Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:57.387803 containerd[1489]: time="2026-04-13T19:20:57.387756700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:20:57.389496 containerd[1489]: 
time="2026-04-13T19:20:57.389447542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.640885264s" Apr 13 19:20:57.389777 containerd[1489]: time="2026-04-13T19:20:57.389659117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 19:20:57.396353 containerd[1489]: time="2026-04-13T19:20:57.396175261Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:20:57.417408 containerd[1489]: time="2026-04-13T19:20:57.417172030Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26\"" Apr 13 19:20:57.418300 containerd[1489]: time="2026-04-13T19:20:57.418250552Z" level=info msg="StartContainer for \"52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26\"" Apr 13 19:20:57.459282 systemd[1]: Started cri-containerd-52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26.scope - libcontainer container 52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26. 
Apr 13 19:20:57.494102 containerd[1489]: time="2026-04-13T19:20:57.493003095Z" level=info msg="StartContainer for \"52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26\" returns successfully" Apr 13 19:20:57.602427 systemd[1]: cri-containerd-52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26.scope: Deactivated successfully. Apr 13 19:20:57.744067 containerd[1489]: time="2026-04-13T19:20:57.743676750Z" level=info msg="shim disconnected" id=52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26 namespace=k8s.io Apr 13 19:20:57.744067 containerd[1489]: time="2026-04-13T19:20:57.743763053Z" level=warning msg="cleaning up after shim disconnected" id=52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26 namespace=k8s.io Apr 13 19:20:57.744067 containerd[1489]: time="2026-04-13T19:20:57.743780697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:20:57.766645 containerd[1489]: time="2026-04-13T19:20:57.766059682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:20:58.358310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52ce2ef888cc3cd46ce356742945a51bde846e6c792755d2c9c014d52e4cbb26-rootfs.mount: Deactivated successfully. 
Apr 13 19:20:58.590504 kubelet[2589]: E0413 19:20:58.590401 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:21:00.590116 kubelet[2589]: E0413 19:21:00.589903 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:21:01.185832 containerd[1489]: time="2026-04-13T19:21:01.184769162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:01.188166 containerd[1489]: time="2026-04-13T19:21:01.187352108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:21:01.189795 containerd[1489]: time="2026-04-13T19:21:01.189622503Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:01.193394 containerd[1489]: time="2026-04-13T19:21:01.193335025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:01.195146 containerd[1489]: time="2026-04-13T19:21:01.195073179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.428967445s" Apr 13 19:21:01.195146 containerd[1489]: time="2026-04-13T19:21:01.195131392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:21:01.202185 containerd[1489]: time="2026-04-13T19:21:01.202113175Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:21:01.235656 containerd[1489]: time="2026-04-13T19:21:01.235529392Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9\"" Apr 13 19:21:01.238449 containerd[1489]: time="2026-04-13T19:21:01.238397722Z" level=info msg="StartContainer for \"7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9\"" Apr 13 19:21:01.273806 systemd[1]: run-containerd-runc-k8s.io-7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9-runc.P9owTB.mount: Deactivated successfully. Apr 13 19:21:01.286418 systemd[1]: Started cri-containerd-7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9.scope - libcontainer container 7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9. 
Apr 13 19:21:01.321868 containerd[1489]: time="2026-04-13T19:21:01.321464837Z" level=info msg="StartContainer for \"7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9\" returns successfully" Apr 13 19:21:01.894930 containerd[1489]: time="2026-04-13T19:21:01.894727218Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:21:01.896858 systemd[1]: cri-containerd-7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9.scope: Deactivated successfully. Apr 13 19:21:01.922699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9-rootfs.mount: Deactivated successfully. Apr 13 19:21:01.959672 kubelet[2589]: I0413 19:21:01.959477 2589 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 19:21:01.979465 containerd[1489]: time="2026-04-13T19:21:01.979342644Z" level=info msg="shim disconnected" id=7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9 namespace=k8s.io Apr 13 19:21:01.979465 containerd[1489]: time="2026-04-13T19:21:01.979453109Z" level=warning msg="cleaning up after shim disconnected" id=7c10c61519f103ea379c590b9237df399c1866188b03b385c6ac1dbda02e87c9 namespace=k8s.io Apr 13 19:21:01.979465 containerd[1489]: time="2026-04-13T19:21:01.979465152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:21:02.012735 containerd[1489]: time="2026-04-13T19:21:02.012582495Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:21:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:21:02.032906 systemd[1]: Created slice 
kubepods-besteffort-podf07ad544_5f54_4fa0_9ff2_3bd524dd2c67.slice - libcontainer container kubepods-besteffort-podf07ad544_5f54_4fa0_9ff2_3bd524dd2c67.slice. Apr 13 19:21:02.054026 systemd[1]: Created slice kubepods-burstable-pod37df8aee_778a_40ce_bb48_a75145d05544.slice - libcontainer container kubepods-burstable-pod37df8aee_778a_40ce_bb48_a75145d05544.slice. Apr 13 19:21:02.068375 systemd[1]: Created slice kubepods-besteffort-podf5fa3599_3a44_4a37_a228_2b86b422f5c0.slice - libcontainer container kubepods-besteffort-podf5fa3599_3a44_4a37_a228_2b86b422f5c0.slice. Apr 13 19:21:02.083224 kubelet[2589]: I0413 19:21:02.083183 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07ad544-5f54-4fa0-9ff2-3bd524dd2c67-tigera-ca-bundle\") pod \"calico-kube-controllers-658544489c-rrlcl\" (UID: \"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67\") " pod="calico-system/calico-kube-controllers-658544489c-rrlcl" Apr 13 19:21:02.083537 kubelet[2589]: I0413 19:21:02.083281 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-nginx-config\") pod \"whisker-5f5f6bcd87-76n4z\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.083537 kubelet[2589]: I0413 19:21:02.083313 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-backend-key-pair\") pod \"whisker-5f5f6bcd87-76n4z\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.083537 kubelet[2589]: I0413 19:21:02.083337 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8e45c92-17a2-4c83-ac2b-aa6192ba9a42-calico-apiserver-certs\") pod \"calico-apiserver-7cd4769649-bxqw4\" (UID: \"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42\") " pod="calico-system/calico-apiserver-7cd4769649-bxqw4" Apr 13 19:21:02.083537 kubelet[2589]: I0413 19:21:02.083394 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft67l\" (UniqueName: \"kubernetes.io/projected/f07ad544-5f54-4fa0-9ff2-3bd524dd2c67-kube-api-access-ft67l\") pod \"calico-kube-controllers-658544489c-rrlcl\" (UID: \"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67\") " pod="calico-system/calico-kube-controllers-658544489c-rrlcl" Apr 13 19:21:02.083537 kubelet[2589]: I0413 19:21:02.083431 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rvt\" (UniqueName: \"kubernetes.io/projected/81f175d8-d90c-45ef-9f27-602d07f6d20c-kube-api-access-b8rvt\") pod \"coredns-66bc5c9577-gxbxf\" (UID: \"81f175d8-d90c-45ef-9f27-602d07f6d20c\") " pod="kube-system/coredns-66bc5c9577-gxbxf" Apr 13 19:21:02.084943 kubelet[2589]: I0413 19:21:02.083450 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37df8aee-778a-40ce-bb48-a75145d05544-config-volume\") pod \"coredns-66bc5c9577-4mf58\" (UID: \"37df8aee-778a-40ce-bb48-a75145d05544\") " pod="kube-system/coredns-66bc5c9577-4mf58" Apr 13 19:21:02.084943 kubelet[2589]: I0413 19:21:02.083466 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshpj\" (UniqueName: \"kubernetes.io/projected/37df8aee-778a-40ce-bb48-a75145d05544-kube-api-access-lshpj\") pod \"coredns-66bc5c9577-4mf58\" (UID: \"37df8aee-778a-40ce-bb48-a75145d05544\") " pod="kube-system/coredns-66bc5c9577-4mf58" Apr 13 19:21:02.084943 kubelet[2589]: I0413 
19:21:02.083806 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5fa3599-3a44-4a37-a228-2b86b422f5c0-calico-apiserver-certs\") pod \"calico-apiserver-7cd4769649-d4nq8\" (UID: \"f5fa3599-3a44-4a37-a228-2b86b422f5c0\") " pod="calico-system/calico-apiserver-7cd4769649-d4nq8" Apr 13 19:21:02.084943 kubelet[2589]: I0413 19:21:02.083838 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pvh\" (UniqueName: \"kubernetes.io/projected/a8e45c92-17a2-4c83-ac2b-aa6192ba9a42-kube-api-access-h4pvh\") pod \"calico-apiserver-7cd4769649-bxqw4\" (UID: \"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42\") " pod="calico-system/calico-apiserver-7cd4769649-bxqw4" Apr 13 19:21:02.084943 kubelet[2589]: I0413 19:21:02.083858 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-ca-bundle\") pod \"whisker-5f5f6bcd87-76n4z\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.085101 kubelet[2589]: I0413 19:21:02.083888 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wd6\" (UniqueName: \"kubernetes.io/projected/f5fa3599-3a44-4a37-a228-2b86b422f5c0-kube-api-access-s9wd6\") pod \"calico-apiserver-7cd4769649-d4nq8\" (UID: \"f5fa3599-3a44-4a37-a228-2b86b422f5c0\") " pod="calico-system/calico-apiserver-7cd4769649-d4nq8" Apr 13 19:21:02.085101 kubelet[2589]: I0413 19:21:02.083908 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgcx\" (UniqueName: \"kubernetes.io/projected/3626b64e-cdfe-4c53-adb8-ae7098498212-kube-api-access-nsgcx\") pod \"whisker-5f5f6bcd87-76n4z\" (UID: 
\"3626b64e-cdfe-4c53-adb8-ae7098498212\") " pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.085101 kubelet[2589]: I0413 19:21:02.083924 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81f175d8-d90c-45ef-9f27-602d07f6d20c-config-volume\") pod \"coredns-66bc5c9577-gxbxf\" (UID: \"81f175d8-d90c-45ef-9f27-602d07f6d20c\") " pod="kube-system/coredns-66bc5c9577-gxbxf" Apr 13 19:21:02.085711 systemd[1]: Created slice kubepods-besteffort-poda8e45c92_17a2_4c83_ac2b_aa6192ba9a42.slice - libcontainer container kubepods-besteffort-poda8e45c92_17a2_4c83_ac2b_aa6192ba9a42.slice. Apr 13 19:21:02.101915 systemd[1]: Created slice kubepods-besteffort-pod3626b64e_cdfe_4c53_adb8_ae7098498212.slice - libcontainer container kubepods-besteffort-pod3626b64e_cdfe_4c53_adb8_ae7098498212.slice. Apr 13 19:21:02.111536 systemd[1]: Created slice kubepods-burstable-pod81f175d8_d90c_45ef_9f27_602d07f6d20c.slice - libcontainer container kubepods-burstable-pod81f175d8_d90c_45ef_9f27_602d07f6d20c.slice. Apr 13 19:21:02.124783 systemd[1]: Created slice kubepods-besteffort-podd1b93d06_471c_488c_b357_caa189e1a4ca.slice - libcontainer container kubepods-besteffort-podd1b93d06_471c_488c_b357_caa189e1a4ca.slice. 
Apr 13 19:21:02.188055 kubelet[2589]: I0413 19:21:02.185310 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1b93d06-471c-488c-b357-caa189e1a4ca-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-tlznz\" (UID: \"d1b93d06-471c-488c-b357-caa189e1a4ca\") " pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.188055 kubelet[2589]: I0413 19:21:02.185363 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkphd\" (UniqueName: \"kubernetes.io/projected/d1b93d06-471c-488c-b357-caa189e1a4ca-kube-api-access-gkphd\") pod \"goldmane-cccfbd5cf-tlznz\" (UID: \"d1b93d06-471c-488c-b357-caa189e1a4ca\") " pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.188055 kubelet[2589]: I0413 19:21:02.185466 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1b93d06-471c-488c-b357-caa189e1a4ca-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-tlznz\" (UID: \"d1b93d06-471c-488c-b357-caa189e1a4ca\") " pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.188055 kubelet[2589]: I0413 19:21:02.185615 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b93d06-471c-488c-b357-caa189e1a4ca-config\") pod \"goldmane-cccfbd5cf-tlznz\" (UID: \"d1b93d06-471c-488c-b357-caa189e1a4ca\") " pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.344306 containerd[1489]: time="2026-04-13T19:21:02.344252296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658544489c-rrlcl,Uid:f07ad544-5f54-4fa0-9ff2-3bd524dd2c67,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.372518 containerd[1489]: time="2026-04-13T19:21:02.372342618Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-4mf58,Uid:37df8aee-778a-40ce-bb48-a75145d05544,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:02.380549 containerd[1489]: time="2026-04-13T19:21:02.380505289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-d4nq8,Uid:f5fa3599-3a44-4a37-a228-2b86b422f5c0,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.396789 containerd[1489]: time="2026-04-13T19:21:02.396441305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-bxqw4,Uid:a8e45c92-17a2-4c83-ac2b-aa6192ba9a42,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.412500 containerd[1489]: time="2026-04-13T19:21:02.412100580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5f6bcd87-76n4z,Uid:3626b64e-cdfe-4c53-adb8-ae7098498212,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.423303 containerd[1489]: time="2026-04-13T19:21:02.423183012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gxbxf,Uid:81f175d8-d90c-45ef-9f27-602d07f6d20c,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:02.436605 containerd[1489]: time="2026-04-13T19:21:02.436276084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tlznz,Uid:d1b93d06-471c-488c-b357-caa189e1a4ca,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.515033 containerd[1489]: time="2026-04-13T19:21:02.514875447Z" level=error msg="Failed to destroy network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.518255 containerd[1489]: time="2026-04-13T19:21:02.518190374Z" level=error msg="encountered an error cleaning up failed sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.518735 containerd[1489]: time="2026-04-13T19:21:02.518685683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658544489c-rrlcl,Uid:f07ad544-5f54-4fa0-9ff2-3bd524dd2c67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.519731 kubelet[2589]: E0413 19:21:02.519259 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.519731 kubelet[2589]: E0413 19:21:02.519330 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658544489c-rrlcl" Apr 13 19:21:02.519731 kubelet[2589]: E0413 19:21:02.519350 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658544489c-rrlcl" Apr 13 19:21:02.519981 kubelet[2589]: E0413 19:21:02.519404 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658544489c-rrlcl_calico-system(f07ad544-5f54-4fa0-9ff2-3bd524dd2c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658544489c-rrlcl_calico-system(f07ad544-5f54-4fa0-9ff2-3bd524dd2c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658544489c-rrlcl" podUID="f07ad544-5f54-4fa0-9ff2-3bd524dd2c67" Apr 13 19:21:02.600039 systemd[1]: Created slice kubepods-besteffort-pod4e1c37d2_8528_4b9f_9fc1_29276f518f72.slice - libcontainer container kubepods-besteffort-pod4e1c37d2_8528_4b9f_9fc1_29276f518f72.slice. 
Apr 13 19:21:02.609414 containerd[1489]: time="2026-04-13T19:21:02.608987333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t82fb,Uid:4e1c37d2-8528-4b9f-9fc1-29276f518f72,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:02.612252 containerd[1489]: time="2026-04-13T19:21:02.612192876Z" level=error msg="Failed to destroy network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.612670 containerd[1489]: time="2026-04-13T19:21:02.612623130Z" level=error msg="encountered an error cleaning up failed sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.614371 containerd[1489]: time="2026-04-13T19:21:02.614124180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4mf58,Uid:37df8aee-778a-40ce-bb48-a75145d05544,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.616503 kubelet[2589]: E0413 19:21:02.615175 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.616503 kubelet[2589]: E0413 19:21:02.615243 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4mf58" Apr 13 19:21:02.616503 kubelet[2589]: E0413 19:21:02.615264 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4mf58" Apr 13 19:21:02.616725 kubelet[2589]: E0413 19:21:02.615319 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4mf58_kube-system(37df8aee-778a-40ce-bb48-a75145d05544)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4mf58_kube-system(37df8aee-778a-40ce-bb48-a75145d05544)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4mf58" podUID="37df8aee-778a-40ce-bb48-a75145d05544" Apr 13 19:21:02.622079 containerd[1489]: time="2026-04-13T19:21:02.621737530Z" level=error msg="Failed to destroy network for sandbox 
\"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.622581 containerd[1489]: time="2026-04-13T19:21:02.622509739Z" level=error msg="encountered an error cleaning up failed sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.622642 containerd[1489]: time="2026-04-13T19:21:02.622602800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-d4nq8,Uid:f5fa3599-3a44-4a37-a228-2b86b422f5c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.623091 kubelet[2589]: E0413 19:21:02.623044 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.623202 kubelet[2589]: E0413 19:21:02.623109 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7cd4769649-d4nq8" Apr 13 19:21:02.623202 kubelet[2589]: E0413 19:21:02.623130 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7cd4769649-d4nq8" Apr 13 19:21:02.623284 kubelet[2589]: E0413 19:21:02.623190 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd4769649-d4nq8_calico-system(f5fa3599-3a44-4a37-a228-2b86b422f5c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd4769649-d4nq8_calico-system(f5fa3599-3a44-4a37-a228-2b86b422f5c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7cd4769649-d4nq8" podUID="f5fa3599-3a44-4a37-a228-2b86b422f5c0" Apr 13 19:21:02.681043 containerd[1489]: time="2026-04-13T19:21:02.679971825Z" level=error msg="Failed to destroy network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.681562 containerd[1489]: 
time="2026-04-13T19:21:02.681485997Z" level=error msg="encountered an error cleaning up failed sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.681622 containerd[1489]: time="2026-04-13T19:21:02.681589700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-bxqw4,Uid:a8e45c92-17a2-4c83-ac2b-aa6192ba9a42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.681892 kubelet[2589]: E0413 19:21:02.681850 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.682758 kubelet[2589]: E0413 19:21:02.682718 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7cd4769649-bxqw4" Apr 13 19:21:02.682908 kubelet[2589]: E0413 19:21:02.682888 2589 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7cd4769649-bxqw4" Apr 13 19:21:02.684112 kubelet[2589]: E0413 19:21:02.683057 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd4769649-bxqw4_calico-system(a8e45c92-17a2-4c83-ac2b-aa6192ba9a42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd4769649-bxqw4_calico-system(a8e45c92-17a2-4c83-ac2b-aa6192ba9a42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7cd4769649-bxqw4" podUID="a8e45c92-17a2-4c83-ac2b-aa6192ba9a42" Apr 13 19:21:02.684439 containerd[1489]: time="2026-04-13T19:21:02.684378632Z" level=error msg="Failed to destroy network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.685088 containerd[1489]: time="2026-04-13T19:21:02.684759515Z" level=error msg="encountered an error cleaning up failed sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.685325 containerd[1489]: time="2026-04-13T19:21:02.684820369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5f6bcd87-76n4z,Uid:3626b64e-cdfe-4c53-adb8-ae7098498212,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.685447 kubelet[2589]: E0413 19:21:02.685409 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.685788 kubelet[2589]: E0413 19:21:02.685461 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.685788 kubelet[2589]: E0413 19:21:02.685485 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-5f5f6bcd87-76n4z" Apr 13 19:21:02.685788 kubelet[2589]: E0413 19:21:02.685587 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f5f6bcd87-76n4z_calico-system(3626b64e-cdfe-4c53-adb8-ae7098498212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f5f6bcd87-76n4z_calico-system(3626b64e-cdfe-4c53-adb8-ae7098498212)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5f6bcd87-76n4z" podUID="3626b64e-cdfe-4c53-adb8-ae7098498212" Apr 13 19:21:02.687665 containerd[1489]: time="2026-04-13T19:21:02.687609781Z" level=error msg="Failed to destroy network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.688288 containerd[1489]: time="2026-04-13T19:21:02.688253922Z" level=error msg="encountered an error cleaning up failed sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.688355 containerd[1489]: time="2026-04-13T19:21:02.688332899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tlznz,Uid:d1b93d06-471c-488c-b357-caa189e1a4ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.688922 kubelet[2589]: E0413 19:21:02.688621 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.690543 kubelet[2589]: E0413 19:21:02.688724 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.690543 kubelet[2589]: E0413 19:21:02.689905 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tlznz" Apr 13 19:21:02.690543 kubelet[2589]: E0413 19:21:02.689968 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-tlznz_calico-system(d1b93d06-471c-488c-b357-caa189e1a4ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-cccfbd5cf-tlznz_calico-system(d1b93d06-471c-488c-b357-caa189e1a4ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tlznz" podUID="d1b93d06-471c-488c-b357-caa189e1a4ca" Apr 13 19:21:02.722684 containerd[1489]: time="2026-04-13T19:21:02.722612700Z" level=error msg="Failed to destroy network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.724988 containerd[1489]: time="2026-04-13T19:21:02.723246119Z" level=error msg="encountered an error cleaning up failed sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.724988 containerd[1489]: time="2026-04-13T19:21:02.723324816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gxbxf,Uid:81f175d8-d90c-45ef-9f27-602d07f6d20c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.725242 kubelet[2589]: E0413 19:21:02.723601 2589 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.725242 kubelet[2589]: E0413 19:21:02.723662 2589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gxbxf" Apr 13 19:21:02.725242 kubelet[2589]: E0413 19:21:02.723681 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gxbxf" Apr 13 19:21:02.725391 kubelet[2589]: E0413 19:21:02.723736 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gxbxf_kube-system(81f175d8-d90c-45ef-9f27-602d07f6d20c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gxbxf_kube-system(81f175d8-d90c-45ef-9f27-602d07f6d20c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gxbxf" podUID="81f175d8-d90c-45ef-9f27-602d07f6d20c" Apr 13 19:21:02.744586 containerd[1489]: time="2026-04-13T19:21:02.744511424Z" level=error msg="Failed to destroy network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.745161 containerd[1489]: time="2026-04-13T19:21:02.745124438Z" level=error msg="encountered an error cleaning up failed sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.745415 containerd[1489]: time="2026-04-13T19:21:02.745385055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t82fb,Uid:4e1c37d2-8528-4b9f-9fc1-29276f518f72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.745829 kubelet[2589]: E0413 19:21:02.745781 2589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.745915 kubelet[2589]: E0413 19:21:02.745850 2589 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t82fb" Apr 13 19:21:02.745915 kubelet[2589]: E0413 19:21:02.745892 2589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t82fb" Apr 13 19:21:02.745970 kubelet[2589]: E0413 19:21:02.745946 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t82fb_calico-system(4e1c37d2-8528-4b9f-9fc1-29276f518f72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t82fb_calico-system(4e1c37d2-8528-4b9f-9fc1-29276f518f72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:21:02.811628 kubelet[2589]: I0413 19:21:02.811480 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:02.817283 containerd[1489]: 
time="2026-04-13T19:21:02.816645808Z" level=info msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" Apr 13 19:21:02.817283 containerd[1489]: time="2026-04-13T19:21:02.816912387Z" level=info msg="Ensure that sandbox 0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0 in task-service has been cleanup successfully" Apr 13 19:21:02.823980 kubelet[2589]: I0413 19:21:02.823358 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:02.828359 containerd[1489]: time="2026-04-13T19:21:02.826821561Z" level=info msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" Apr 13 19:21:02.828359 containerd[1489]: time="2026-04-13T19:21:02.828044629Z" level=info msg="Ensure that sandbox de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684 in task-service has been cleanup successfully" Apr 13 19:21:02.830322 kubelet[2589]: I0413 19:21:02.830286 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:02.836431 containerd[1489]: time="2026-04-13T19:21:02.836386459Z" level=info msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" Apr 13 19:21:02.838001 containerd[1489]: time="2026-04-13T19:21:02.837917395Z" level=info msg="Ensure that sandbox 5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131 in task-service has been cleanup successfully" Apr 13 19:21:02.841126 kubelet[2589]: I0413 19:21:02.841041 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:02.845042 containerd[1489]: time="2026-04-13T19:21:02.844393015Z" level=info msg="StopPodSandbox for 
\"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" Apr 13 19:21:02.847865 containerd[1489]: time="2026-04-13T19:21:02.845288572Z" level=info msg="Ensure that sandbox 045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128 in task-service has been cleanup successfully" Apr 13 19:21:02.853815 kubelet[2589]: I0413 19:21:02.853778 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:02.855369 containerd[1489]: time="2026-04-13T19:21:02.845786681Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 19:21:02.864923 containerd[1489]: time="2026-04-13T19:21:02.864875829Z" level=info msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" Apr 13 19:21:02.866861 containerd[1489]: time="2026-04-13T19:21:02.866532592Z" level=info msg="Ensure that sandbox a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa in task-service has been cleanup successfully" Apr 13 19:21:02.868293 kubelet[2589]: I0413 19:21:02.868242 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:02.871357 containerd[1489]: time="2026-04-13T19:21:02.870862622Z" level=info msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" Apr 13 19:21:02.871357 containerd[1489]: time="2026-04-13T19:21:02.871174971Z" level=info msg="Ensure that sandbox d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94 in task-service has been cleanup successfully" Apr 13 19:21:02.878726 kubelet[2589]: I0413 19:21:02.878424 2589 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:02.883144 containerd[1489]: time="2026-04-13T19:21:02.882535983Z" level=info msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" Apr 13 19:21:02.885919 containerd[1489]: time="2026-04-13T19:21:02.885181763Z" level=info msg="Ensure that sandbox 5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c in task-service has been cleanup successfully" Apr 13 19:21:02.897715 kubelet[2589]: I0413 19:21:02.897437 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:02.907401 containerd[1489]: time="2026-04-13T19:21:02.907348426Z" level=info msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" Apr 13 19:21:02.907860 containerd[1489]: time="2026-04-13T19:21:02.907804926Z" level=info msg="Ensure that sandbox b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469 in task-service has been cleanup successfully" Apr 13 19:21:02.976102 containerd[1489]: time="2026-04-13T19:21:02.975954317Z" level=error msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" failed" error="failed to destroy network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.976641 kubelet[2589]: E0413 19:21:02.976467 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:02.976641 kubelet[2589]: E0413 19:21:02.976532 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131"} Apr 13 19:21:02.976641 kubelet[2589]: E0413 19:21:02.976605 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:02.976641 kubelet[2589]: E0413 19:21:02.976636 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7cd4769649-bxqw4" podUID="a8e45c92-17a2-4c83-ac2b-aa6192ba9a42" Apr 13 19:21:02.978776 containerd[1489]: time="2026-04-13T19:21:02.978718963Z" level=info msg="CreateContainer within sandbox \"3de2203ec57f9f8c1123a389080fed13bc2b8492d498777785ab6d99987db515\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234\"" Apr 13 19:21:02.983410 containerd[1489]: time="2026-04-13T19:21:02.982412654Z" 
level=info msg="StartContainer for \"497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234\"" Apr 13 19:21:02.999057 containerd[1489]: time="2026-04-13T19:21:02.997239826Z" level=error msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" failed" error="failed to destroy network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.999057 containerd[1489]: time="2026-04-13T19:21:02.997732094Z" level=error msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" failed" error="failed to destroy network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:02.999231 kubelet[2589]: E0413 19:21:02.997504 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:02.999231 kubelet[2589]: E0413 19:21:02.997561 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa"} Apr 13 19:21:02.999231 kubelet[2589]: E0413 19:21:02.997642 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"81f175d8-d90c-45ef-9f27-602d07f6d20c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:02.999231 kubelet[2589]: E0413 19:21:02.997672 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81f175d8-d90c-45ef-9f27-602d07f6d20c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gxbxf" podUID="81f175d8-d90c-45ef-9f27-602d07f6d20c" Apr 13 19:21:03.000250 kubelet[2589]: E0413 19:21:02.999571 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:03.000377 kubelet[2589]: E0413 19:21:03.000273 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684"} Apr 13 19:21:03.000377 kubelet[2589]: E0413 19:21:03.000313 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5fa3599-3a44-4a37-a228-2b86b422f5c0\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.000377 kubelet[2589]: E0413 19:21:03.000347 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5fa3599-3a44-4a37-a228-2b86b422f5c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7cd4769649-d4nq8" podUID="f5fa3599-3a44-4a37-a228-2b86b422f5c0" Apr 13 19:21:03.024804 containerd[1489]: time="2026-04-13T19:21:03.024749538Z" level=error msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" failed" error="failed to destroy network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:03.025298 kubelet[2589]: E0413 19:21:03.025247 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:03.025381 kubelet[2589]: E0413 19:21:03.025304 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0"} Apr 13 19:21:03.025381 kubelet[2589]: E0413 19:21:03.025342 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1b93d06-471c-488c-b357-caa189e1a4ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.025381 kubelet[2589]: E0413 19:21:03.025369 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1b93d06-471c-488c-b357-caa189e1a4ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tlznz" podUID="d1b93d06-471c-488c-b357-caa189e1a4ca" Apr 13 19:21:03.035583 containerd[1489]: time="2026-04-13T19:21:03.035522106Z" level=error msg="StopPodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" failed" error="failed to destroy network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 13 19:21:03.035850 kubelet[2589]: E0413 19:21:03.035803 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:03.035902 kubelet[2589]: E0413 19:21:03.035858 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128"} Apr 13 19:21:03.035902 kubelet[2589]: E0413 19:21:03.035892 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.036007 kubelet[2589]: E0413 19:21:03.035918 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e1c37d2-8528-4b9f-9fc1-29276f518f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t82fb" podUID="4e1c37d2-8528-4b9f-9fc1-29276f518f72" Apr 13 19:21:03.045238 
containerd[1489]: time="2026-04-13T19:21:03.045172877Z" level=error msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" failed" error="failed to destroy network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:03.045623 kubelet[2589]: E0413 19:21:03.045556 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:03.045927 kubelet[2589]: E0413 19:21:03.045888 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94"} Apr 13 19:21:03.045975 kubelet[2589]: E0413 19:21:03.045949 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3626b64e-cdfe-4c53-adb8-ae7098498212\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.046080 kubelet[2589]: E0413 19:21:03.045980 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3626b64e-cdfe-4c53-adb8-ae7098498212\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5f6bcd87-76n4z" podUID="3626b64e-cdfe-4c53-adb8-ae7098498212" Apr 13 19:21:03.051158 containerd[1489]: time="2026-04-13T19:21:03.051092775Z" level=error msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" failed" error="failed to destroy network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:03.051400 kubelet[2589]: E0413 19:21:03.051354 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:03.051463 kubelet[2589]: E0413 19:21:03.051412 2589 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469"} Apr 13 19:21:03.051463 kubelet[2589]: E0413 19:21:03.051445 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.051543 kubelet[2589]: E0413 19:21:03.051480 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658544489c-rrlcl" podUID="f07ad544-5f54-4fa0-9ff2-3bd524dd2c67" Apr 13 19:21:03.056481 containerd[1489]: time="2026-04-13T19:21:03.056426268Z" level=error msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" failed" error="failed to destroy network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:21:03.057191 kubelet[2589]: E0413 19:21:03.056823 2589 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:03.057191 kubelet[2589]: E0413 19:21:03.056882 2589 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c"} Apr 13 19:21:03.057191 kubelet[2589]: E0413 19:21:03.056915 2589 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37df8aee-778a-40ce-bb48-a75145d05544\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:21:03.057191 kubelet[2589]: E0413 19:21:03.056948 2589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37df8aee-778a-40ce-bb48-a75145d05544\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4mf58" podUID="37df8aee-778a-40ce-bb48-a75145d05544" Apr 13 19:21:03.064308 systemd[1]: Started cri-containerd-497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234.scope - libcontainer container 497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234. 
Apr 13 19:21:03.104372 containerd[1489]: time="2026-04-13T19:21:03.104312643Z" level=info msg="StartContainer for \"497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234\" returns successfully" Apr 13 19:21:03.910663 containerd[1489]: time="2026-04-13T19:21:03.910299734Z" level=info msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" Apr 13 19:21:03.944684 systemd[1]: run-containerd-runc-k8s.io-497a8eba0c077fce5c52f5ae189da2b27f93943399ed9b59a68d127744894234-runc.Iyn7HO.mount: Deactivated successfully. Apr 13 19:21:03.968576 kubelet[2589]: I0413 19:21:03.968518 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dgb58" podStartSLOduration=3.980629713 podStartE2EDuration="19.968490178s" podCreationTimestamp="2026-04-13 19:20:44 +0000 UTC" firstStartedPulling="2026-04-13 19:20:45.208303081 +0000 UTC m=+21.772014247" lastFinishedPulling="2026-04-13 19:21:01.196163506 +0000 UTC m=+37.759874712" observedRunningTime="2026-04-13 19:21:03.966534442 +0000 UTC m=+40.530245608" watchObservedRunningTime="2026-04-13 19:21:03.968490178 +0000 UTC m=+40.532201344" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.013 [INFO][3868] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.014 [INFO][3868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" iface="eth0" netns="/var/run/netns/cni-e779c694-0465-6d46-566a-4abc69094677" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.023 [INFO][3868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" iface="eth0" netns="/var/run/netns/cni-e779c694-0465-6d46-566a-4abc69094677" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.026 [INFO][3868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" iface="eth0" netns="/var/run/netns/cni-e779c694-0465-6d46-566a-4abc69094677" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.026 [INFO][3868] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.026 [INFO][3868] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.075 [INFO][3890] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.075 [INFO][3890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.075 [INFO][3890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.086 [WARNING][3890] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.086 [INFO][3890] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.089 [INFO][3890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:04.097714 containerd[1489]: 2026-04-13 19:21:04.093 [INFO][3868] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:04.099209 containerd[1489]: time="2026-04-13T19:21:04.098090406Z" level=info msg="TearDown network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" successfully" Apr 13 19:21:04.099209 containerd[1489]: time="2026-04-13T19:21:04.098137496Z" level=info msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" returns successfully" Apr 13 19:21:04.100352 systemd[1]: run-netns-cni\x2de779c694\x2d0465\x2d6d46\x2d566a\x2d4abc69094677.mount: Deactivated successfully. 
Apr 13 19:21:04.209535 kubelet[2589]: I0413 19:21:04.208877 2589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-nginx-config\") pod \"3626b64e-cdfe-4c53-adb8-ae7098498212\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " Apr 13 19:21:04.209535 kubelet[2589]: I0413 19:21:04.208948 2589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-backend-key-pair\") pod \"3626b64e-cdfe-4c53-adb8-ae7098498212\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " Apr 13 19:21:04.209535 kubelet[2589]: I0413 19:21:04.209223 2589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-ca-bundle\") pod \"3626b64e-cdfe-4c53-adb8-ae7098498212\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " Apr 13 19:21:04.209535 kubelet[2589]: I0413 19:21:04.209288 2589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgcx\" (UniqueName: \"kubernetes.io/projected/3626b64e-cdfe-4c53-adb8-ae7098498212-kube-api-access-nsgcx\") pod \"3626b64e-cdfe-4c53-adb8-ae7098498212\" (UID: \"3626b64e-cdfe-4c53-adb8-ae7098498212\") " Apr 13 19:21:04.218643 kubelet[2589]: I0413 19:21:04.213732 2589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3626b64e-cdfe-4c53-adb8-ae7098498212" (UID: "3626b64e-cdfe-4c53-adb8-ae7098498212"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:21:04.218643 kubelet[2589]: I0413 19:21:04.218324 2589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3626b64e-cdfe-4c53-adb8-ae7098498212" (UID: "3626b64e-cdfe-4c53-adb8-ae7098498212"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:21:04.221490 kubelet[2589]: I0413 19:21:04.219457 2589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3626b64e-cdfe-4c53-adb8-ae7098498212-kube-api-access-nsgcx" (OuterVolumeSpecName: "kube-api-access-nsgcx") pod "3626b64e-cdfe-4c53-adb8-ae7098498212" (UID: "3626b64e-cdfe-4c53-adb8-ae7098498212"). InnerVolumeSpecName "kube-api-access-nsgcx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:21:04.221111 systemd[1]: var-lib-kubelet-pods-3626b64e\x2dcdfe\x2d4c53\x2dadb8\x2dae7098498212-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsgcx.mount: Deactivated successfully. Apr 13 19:21:04.224166 kubelet[2589]: I0413 19:21:04.224084 2589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3626b64e-cdfe-4c53-adb8-ae7098498212" (UID: "3626b64e-cdfe-4c53-adb8-ae7098498212"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:21:04.226555 systemd[1]: var-lib-kubelet-pods-3626b64e\x2dcdfe\x2d4c53\x2dadb8\x2dae7098498212-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 19:21:04.310356 kubelet[2589]: I0413 19:21:04.310204 2589 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-nginx-config\") on node \"ci-4081-3-7-b-7ea64c4796\" DevicePath \"\"" Apr 13 19:21:04.310356 kubelet[2589]: I0413 19:21:04.310247 2589 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-backend-key-pair\") on node \"ci-4081-3-7-b-7ea64c4796\" DevicePath \"\"" Apr 13 19:21:04.310356 kubelet[2589]: I0413 19:21:04.310261 2589 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3626b64e-cdfe-4c53-adb8-ae7098498212-whisker-ca-bundle\") on node \"ci-4081-3-7-b-7ea64c4796\" DevicePath \"\"" Apr 13 19:21:04.310356 kubelet[2589]: I0413 19:21:04.310270 2589 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsgcx\" (UniqueName: \"kubernetes.io/projected/3626b64e-cdfe-4c53-adb8-ae7098498212-kube-api-access-nsgcx\") on node \"ci-4081-3-7-b-7ea64c4796\" DevicePath \"\"" Apr 13 19:21:04.928565 systemd[1]: Removed slice kubepods-besteffort-pod3626b64e_cdfe_4c53_adb8_ae7098498212.slice - libcontainer container kubepods-besteffort-pod3626b64e_cdfe_4c53_adb8_ae7098498212.slice. Apr 13 19:21:05.023967 systemd[1]: Created slice kubepods-besteffort-poda3e154f3_d885_4c3c_975e_b3a8d964830e.slice - libcontainer container kubepods-besteffort-poda3e154f3_d885_4c3c_975e_b3a8d964830e.slice. 
Apr 13 19:21:05.117761 kubelet[2589]: I0413 19:21:05.117713 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshvx\" (UniqueName: \"kubernetes.io/projected/a3e154f3-d885-4c3c-975e-b3a8d964830e-kube-api-access-nshvx\") pod \"whisker-7ddb9fc6-jcvng\" (UID: \"a3e154f3-d885-4c3c-975e-b3a8d964830e\") " pod="calico-system/whisker-7ddb9fc6-jcvng" Apr 13 19:21:05.117761 kubelet[2589]: I0413 19:21:05.117771 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a3e154f3-d885-4c3c-975e-b3a8d964830e-nginx-config\") pod \"whisker-7ddb9fc6-jcvng\" (UID: \"a3e154f3-d885-4c3c-975e-b3a8d964830e\") " pod="calico-system/whisker-7ddb9fc6-jcvng" Apr 13 19:21:05.117966 kubelet[2589]: I0413 19:21:05.117796 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3e154f3-d885-4c3c-975e-b3a8d964830e-whisker-backend-key-pair\") pod \"whisker-7ddb9fc6-jcvng\" (UID: \"a3e154f3-d885-4c3c-975e-b3a8d964830e\") " pod="calico-system/whisker-7ddb9fc6-jcvng" Apr 13 19:21:05.117966 kubelet[2589]: I0413 19:21:05.117814 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3e154f3-d885-4c3c-975e-b3a8d964830e-whisker-ca-bundle\") pod \"whisker-7ddb9fc6-jcvng\" (UID: \"a3e154f3-d885-4c3c-975e-b3a8d964830e\") " pod="calico-system/whisker-7ddb9fc6-jcvng" Apr 13 19:21:05.335454 containerd[1489]: time="2026-04-13T19:21:05.335284404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ddb9fc6-jcvng,Uid:a3e154f3-d885-4c3c-975e-b3a8d964830e,Namespace:calico-system,Attempt:0,}" Apr 13 19:21:05.512044 systemd-networkd[1382]: calie2471eae6e7: Link UP Apr 13 19:21:05.512293 systemd-networkd[1382]: calie2471eae6e7: 
Gained carrier Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.382 [ERROR][4023] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.400 [INFO][4023] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0 whisker-7ddb9fc6- calico-system a3e154f3-d885-4c3c-975e-b3a8d964830e 889 0 2026-04-13 19:21:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7ddb9fc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 whisker-7ddb9fc6-jcvng eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2471eae6e7 [] [] }} ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.400 [INFO][4023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.432 [INFO][4034] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" HandleID="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 
19:21:05.445 [INFO][4034] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" HandleID="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002733d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"whisker-7ddb9fc6-jcvng", "timestamp":"2026-04-13 19:21:05.432692359 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002531e0)} Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.446 [INFO][4034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.446 [INFO][4034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.446 [INFO][4034] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.450 [INFO][4034] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.458 [INFO][4034] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.464 [INFO][4034] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.467 [INFO][4034] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.471 [INFO][4034] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.471 [INFO][4034] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.474 [INFO][4034] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163 Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.480 [INFO][4034] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.493 [INFO][4034] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.129/26] block=192.168.62.128/26 handle="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.493 [INFO][4034] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.129/26] handle="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.494 [INFO][4034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:05.534407 containerd[1489]: 2026-04-13 19:21:05.494 [INFO][4034] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.129/26] IPv6=[] ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" HandleID="k8s-pod-network.908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.497 [INFO][4023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0", GenerateName:"whisker-7ddb9fc6-", Namespace:"calico-system", SelfLink:"", UID:"a3e154f3-d885-4c3c-975e-b3a8d964830e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ddb9fc6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"whisker-7ddb9fc6-jcvng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2471eae6e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.499 [INFO][4023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.129/32] ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.499 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2471eae6e7 ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.510 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.513 [INFO][4023] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0", GenerateName:"whisker-7ddb9fc6-", Namespace:"calico-system", SelfLink:"", UID:"a3e154f3-d885-4c3c-975e-b3a8d964830e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ddb9fc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163", Pod:"whisker-7ddb9fc6-jcvng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2471eae6e7", MAC:"5a:c4:3e:7b:6f:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:05.537492 containerd[1489]: 2026-04-13 19:21:05.529 [INFO][4023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163" 
Namespace="calico-system" Pod="whisker-7ddb9fc6-jcvng" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--7ddb9fc6--jcvng-eth0" Apr 13 19:21:05.557939 containerd[1489]: time="2026-04-13T19:21:05.557332039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:05.557939 containerd[1489]: time="2026-04-13T19:21:05.557736120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:05.557939 containerd[1489]: time="2026-04-13T19:21:05.557757724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:05.557939 containerd[1489]: time="2026-04-13T19:21:05.557861265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:05.588338 systemd[1]: Started cri-containerd-908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163.scope - libcontainer container 908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163. 
Apr 13 19:21:05.595790 kubelet[2589]: I0413 19:21:05.595729 2589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3626b64e-cdfe-4c53-adb8-ae7098498212" path="/var/lib/kubelet/pods/3626b64e-cdfe-4c53-adb8-ae7098498212/volumes" Apr 13 19:21:05.634216 containerd[1489]: time="2026-04-13T19:21:05.634114071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ddb9fc6-jcvng,Uid:a3e154f3-d885-4c3c-975e-b3a8d964830e,Namespace:calico-system,Attempt:0,} returns sandbox id \"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163\"" Apr 13 19:21:05.637258 containerd[1489]: time="2026-04-13T19:21:05.636970562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:21:06.651233 systemd-networkd[1382]: calie2471eae6e7: Gained IPv6LL Apr 13 19:21:07.414962 containerd[1489]: time="2026-04-13T19:21:07.413860468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:07.414962 containerd[1489]: time="2026-04-13T19:21:07.414903025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:21:07.417275 containerd[1489]: time="2026-04-13T19:21:07.417198978Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:07.421756 containerd[1489]: time="2026-04-13T19:21:07.421383249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:07.422386 containerd[1489]: time="2026-04-13T19:21:07.422343150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", 
repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.78512622s" Apr 13 19:21:07.422386 containerd[1489]: time="2026-04-13T19:21:07.422383198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:21:07.430731 containerd[1489]: time="2026-04-13T19:21:07.430688047Z" level=info msg="CreateContainer within sandbox \"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:21:07.453575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466896167.mount: Deactivated successfully. Apr 13 19:21:07.462007 containerd[1489]: time="2026-04-13T19:21:07.461918547Z" level=info msg="CreateContainer within sandbox \"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"192d2ce32a1cf6cef37f436f8a1ccbaa838a7f9928149f2b12d16319a1af8c41\"" Apr 13 19:21:07.464354 containerd[1489]: time="2026-04-13T19:21:07.464134885Z" level=info msg="StartContainer for \"192d2ce32a1cf6cef37f436f8a1ccbaa838a7f9928149f2b12d16319a1af8c41\"" Apr 13 19:21:07.507468 systemd[1]: Started cri-containerd-192d2ce32a1cf6cef37f436f8a1ccbaa838a7f9928149f2b12d16319a1af8c41.scope - libcontainer container 192d2ce32a1cf6cef37f436f8a1ccbaa838a7f9928149f2b12d16319a1af8c41. 
Apr 13 19:21:07.554008 containerd[1489]: time="2026-04-13T19:21:07.553930249Z" level=info msg="StartContainer for \"192d2ce32a1cf6cef37f436f8a1ccbaa838a7f9928149f2b12d16319a1af8c41\" returns successfully" Apr 13 19:21:07.556934 containerd[1489]: time="2026-04-13T19:21:07.556892209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:21:08.390363 kubelet[2589]: I0413 19:21:08.389759 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:21:09.803070 kernel: calico-node[4208]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:21:09.826963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274958574.mount: Deactivated successfully. Apr 13 19:21:09.866521 containerd[1489]: time="2026-04-13T19:21:09.866457182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:09.871453 containerd[1489]: time="2026-04-13T19:21:09.871397708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:21:09.872693 containerd[1489]: time="2026-04-13T19:21:09.872636970Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:09.881796 containerd[1489]: time="2026-04-13T19:21:09.881716997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:09.884552 containerd[1489]: time="2026-04-13T19:21:09.884483773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.32751331s" Apr 13 19:21:09.884552 containerd[1489]: time="2026-04-13T19:21:09.884540703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:21:09.890436 containerd[1489]: time="2026-04-13T19:21:09.890238244Z" level=info msg="CreateContainer within sandbox \"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:21:09.917525 containerd[1489]: time="2026-04-13T19:21:09.916045990Z" level=info msg="CreateContainer within sandbox \"908f5a12c978d739d2863c89766babc82f529d759ad7845fcb9c5a6f1f1b4163\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9\"" Apr 13 19:21:09.917525 containerd[1489]: time="2026-04-13T19:21:09.916837332Z" level=info msg="StartContainer for \"1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9\"" Apr 13 19:21:09.992673 systemd[1]: Started cri-containerd-1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9.scope - libcontainer container 1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9. 
Apr 13 19:21:10.055252 containerd[1489]: time="2026-04-13T19:21:10.054505811Z" level=info msg="StartContainer for \"1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9\" returns successfully" Apr 13 19:21:10.316856 systemd-networkd[1382]: vxlan.calico: Link UP Apr 13 19:21:10.317100 systemd-networkd[1382]: vxlan.calico: Gained carrier Apr 13 19:21:10.427257 systemd[1]: run-containerd-runc-k8s.io-1a0c9a9f84df1f5dcd56d35ab4acf7e2dc65a702d5e8fafc40d8dfc42a80d8b9-runc.ALeD3M.mount: Deactivated successfully. Apr 13 19:21:11.515377 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Apr 13 19:21:13.591142 containerd[1489]: time="2026-04-13T19:21:13.590872267Z" level=info msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" Apr 13 19:21:13.664884 kubelet[2589]: I0413 19:21:13.664754 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7ddb9fc6-jcvng" podStartSLOduration=5.41565954 podStartE2EDuration="9.664731083s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="2026-04-13 19:21:05.636371202 +0000 UTC m=+42.200082328" lastFinishedPulling="2026-04-13 19:21:09.885442705 +0000 UTC m=+46.449153871" observedRunningTime="2026-04-13 19:21:10.980376406 +0000 UTC m=+47.544087572" watchObservedRunningTime="2026-04-13 19:21:13.664731083 +0000 UTC m=+50.228442249" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.666 [INFO][4397] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.666 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" iface="eth0" netns="/var/run/netns/cni-54bc8632-d3f5-8989-1de6-6e76e47ca847" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.667 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" iface="eth0" netns="/var/run/netns/cni-54bc8632-d3f5-8989-1de6-6e76e47ca847" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.668 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" iface="eth0" netns="/var/run/netns/cni-54bc8632-d3f5-8989-1de6-6e76e47ca847" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.668 [INFO][4397] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.668 [INFO][4397] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.698 [INFO][4405] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.698 [INFO][4405] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.698 [INFO][4405] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.715 [WARNING][4405] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.715 [INFO][4405] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.717 [INFO][4405] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:13.722508 containerd[1489]: 2026-04-13 19:21:13.720 [INFO][4397] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:13.722947 containerd[1489]: time="2026-04-13T19:21:13.722730831Z" level=info msg="TearDown network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" successfully" Apr 13 19:21:13.722947 containerd[1489]: time="2026-04-13T19:21:13.722766237Z" level=info msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" returns successfully" Apr 13 19:21:13.726166 systemd[1]: run-netns-cni\x2d54bc8632\x2dd3f5\x2d8989\x2d1de6\x2d6e76e47ca847.mount: Deactivated successfully. 
Apr 13 19:21:13.729661 containerd[1489]: time="2026-04-13T19:21:13.729604753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tlznz,Uid:d1b93d06-471c-488c-b357-caa189e1a4ca,Namespace:calico-system,Attempt:1,}" Apr 13 19:21:13.899186 systemd-networkd[1382]: cali3d0b1d83e92: Link UP Apr 13 19:21:13.900690 systemd-networkd[1382]: cali3d0b1d83e92: Gained carrier Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.788 [INFO][4413] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0 goldmane-cccfbd5cf- calico-system d1b93d06-471c-488c-b357-caa189e1a4ca 936 0 2026-04-13 19:20:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 goldmane-cccfbd5cf-tlznz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3d0b1d83e92 [] [] }} ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.788 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.821 [INFO][4425] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" 
HandleID="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.844 [INFO][4425] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" HandleID="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002736a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"goldmane-cccfbd5cf-tlznz", "timestamp":"2026-04-13 19:21:13.821243391 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000313760)} Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.844 [INFO][4425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.844 [INFO][4425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.844 [INFO][4425] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.851 [INFO][4425] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.858 [INFO][4425] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.866 [INFO][4425] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.869 [INFO][4425] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.872 [INFO][4425] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.872 [INFO][4425] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.874 [INFO][4425] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.879 [INFO][4425] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.888 [INFO][4425] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.130/26] block=192.168.62.128/26 handle="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.888 [INFO][4425] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.130/26] handle="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.888 [INFO][4425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:13.924848 containerd[1489]: 2026-04-13 19:21:13.888 [INFO][4425] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.130/26] IPv6=[] ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" HandleID="k8s-pod-network.b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.891 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d1b93d06-471c-488c-b357-caa189e1a4ca", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"goldmane-cccfbd5cf-tlznz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d0b1d83e92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.892 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.130/32] ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.892 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d0b1d83e92 ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.900 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.901 [INFO][4413] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d1b93d06-471c-488c-b357-caa189e1a4ca", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac", Pod:"goldmane-cccfbd5cf-tlznz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d0b1d83e92", MAC:"7a:60:59:a3:8c:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:13.926725 containerd[1489]: 2026-04-13 19:21:13.920 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tlznz" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:13.953232 containerd[1489]: time="2026-04-13T19:21:13.952927847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:13.953232 containerd[1489]: time="2026-04-13T19:21:13.953007820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:13.953232 containerd[1489]: time="2026-04-13T19:21:13.953137761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:13.953593 containerd[1489]: time="2026-04-13T19:21:13.953410765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:13.981049 systemd[1]: run-containerd-runc-k8s.io-b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac-runc.vy6Tqi.mount: Deactivated successfully. Apr 13 19:21:13.990411 systemd[1]: Started cri-containerd-b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac.scope - libcontainer container b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac. 
Apr 13 19:21:14.047473 containerd[1489]: time="2026-04-13T19:21:14.047422994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tlznz,Uid:d1b93d06-471c-488c-b357-caa189e1a4ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac\"" Apr 13 19:21:14.055925 containerd[1489]: time="2026-04-13T19:21:14.054165832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:21:14.592477 containerd[1489]: time="2026-04-13T19:21:14.592409664Z" level=info msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.685 [INFO][4503] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.685 [INFO][4503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" iface="eth0" netns="/var/run/netns/cni-97ca8605-96ca-6e52-afe5-7fa1a0e3d55e" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.685 [INFO][4503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" iface="eth0" netns="/var/run/netns/cni-97ca8605-96ca-6e52-afe5-7fa1a0e3d55e" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.686 [INFO][4503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" iface="eth0" netns="/var/run/netns/cni-97ca8605-96ca-6e52-afe5-7fa1a0e3d55e" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.686 [INFO][4503] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.686 [INFO][4503] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.716 [INFO][4510] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.716 [INFO][4510] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.716 [INFO][4510] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.727 [WARNING][4510] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.727 [INFO][4510] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.729 [INFO][4510] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:14.733749 containerd[1489]: 2026-04-13 19:21:14.731 [INFO][4503] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:14.736321 containerd[1489]: time="2026-04-13T19:21:14.735468090Z" level=info msg="TearDown network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" successfully" Apr 13 19:21:14.736321 containerd[1489]: time="2026-04-13T19:21:14.735508017Z" level=info msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" returns successfully" Apr 13 19:21:14.738330 systemd[1]: run-netns-cni\x2d97ca8605\x2d96ca\x2d6e52\x2dafe5\x2d7fa1a0e3d55e.mount: Deactivated successfully. 
Apr 13 19:21:14.740345 containerd[1489]: time="2026-04-13T19:21:14.740298582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gxbxf,Uid:81f175d8-d90c-45ef-9f27-602d07f6d20c,Namespace:kube-system,Attempt:1,}" Apr 13 19:21:14.909187 systemd-networkd[1382]: cali9679fda3ab7: Link UP Apr 13 19:21:14.911087 systemd-networkd[1382]: cali9679fda3ab7: Gained carrier Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.812 [INFO][4516] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0 coredns-66bc5c9577- kube-system 81f175d8-d90c-45ef-9f27-602d07f6d20c 944 0 2026-04-13 19:20:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 coredns-66bc5c9577-gxbxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9679fda3ab7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.812 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.841 [INFO][4528] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" 
HandleID="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.853 [INFO][4528] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" HandleID="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"coredns-66bc5c9577-gxbxf", "timestamp":"2026-04-13 19:21:14.841139741 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000310f20)} Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.853 [INFO][4528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.853 [INFO][4528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.853 [INFO][4528] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.856 [INFO][4528] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.863 [INFO][4528] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.870 [INFO][4528] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.873 [INFO][4528] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.876 [INFO][4528] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.876 [INFO][4528] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.879 [INFO][4528] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6 Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.891 [INFO][4528] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.899 [INFO][4528] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.131/26] block=192.168.62.128/26 handle="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.900 [INFO][4528] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.131/26] handle="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.900 [INFO][4528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:14.935717 containerd[1489]: 2026-04-13 19:21:14.900 [INFO][4528] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.131/26] IPv6=[] ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" HandleID="k8s-pod-network.552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.936589 containerd[1489]: 2026-04-13 19:21:14.903 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f175d8-d90c-45ef-9f27-602d07f6d20c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"coredns-66bc5c9577-gxbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9679fda3ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:14.936589 containerd[1489]: 2026-04-13 19:21:14.903 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.131/32] ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.936589 containerd[1489]: 2026-04-13 19:21:14.903 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9679fda3ab7 
ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.936589 containerd[1489]: 2026-04-13 19:21:14.912 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.936589 containerd[1489]: 2026-04-13 19:21:14.912 [INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f175d8-d90c-45ef-9f27-602d07f6d20c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", 
ContainerID:"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6", Pod:"coredns-66bc5c9577-gxbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9679fda3ab7", MAC:"ce:86:02:5c:d5:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:14.937691 containerd[1489]: 2026-04-13 19:21:14.931 [INFO][4516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6" Namespace="kube-system" Pod="coredns-66bc5c9577-gxbxf" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:14.961595 containerd[1489]: time="2026-04-13T19:21:14.961433728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:14.961855 containerd[1489]: time="2026-04-13T19:21:14.961532104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:14.961855 containerd[1489]: time="2026-04-13T19:21:14.961635240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:14.961855 containerd[1489]: time="2026-04-13T19:21:14.961786545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:15.002732 systemd[1]: Started cri-containerd-552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6.scope - libcontainer container 552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6. Apr 13 19:21:15.058617 containerd[1489]: time="2026-04-13T19:21:15.058469055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gxbxf,Uid:81f175d8-d90c-45ef-9f27-602d07f6d20c,Namespace:kube-system,Attempt:1,} returns sandbox id \"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6\"" Apr 13 19:21:15.068605 containerd[1489]: time="2026-04-13T19:21:15.067925777Z" level=info msg="CreateContainer within sandbox \"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:21:15.087105 containerd[1489]: time="2026-04-13T19:21:15.086822337Z" level=info msg="CreateContainer within sandbox \"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90a9bbc4bf62337089349e0f4eccff17ae217b920239a41f0e0f1e5084cbe8b3\"" Apr 13 19:21:15.088535 containerd[1489]: time="2026-04-13T19:21:15.088248121Z" level=info msg="StartContainer for \"90a9bbc4bf62337089349e0f4eccff17ae217b920239a41f0e0f1e5084cbe8b3\"" Apr 13 19:21:15.122302 systemd[1]: Started cri-containerd-90a9bbc4bf62337089349e0f4eccff17ae217b920239a41f0e0f1e5084cbe8b3.scope - libcontainer container 
90a9bbc4bf62337089349e0f4eccff17ae217b920239a41f0e0f1e5084cbe8b3. Apr 13 19:21:15.159910 containerd[1489]: time="2026-04-13T19:21:15.159786688Z" level=info msg="StartContainer for \"90a9bbc4bf62337089349e0f4eccff17ae217b920239a41f0e0f1e5084cbe8b3\" returns successfully" Apr 13 19:21:15.598112 containerd[1489]: time="2026-04-13T19:21:15.597982894Z" level=info msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.673 [INFO][4642] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.674 [INFO][4642] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" iface="eth0" netns="/var/run/netns/cni-9288f258-f618-8989-c15f-39512a62b30e" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.674 [INFO][4642] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" iface="eth0" netns="/var/run/netns/cni-9288f258-f618-8989-c15f-39512a62b30e" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.674 [INFO][4642] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" iface="eth0" netns="/var/run/netns/cni-9288f258-f618-8989-c15f-39512a62b30e" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.674 [INFO][4642] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.674 [INFO][4642] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.707 [INFO][4658] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.707 [INFO][4658] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.707 [INFO][4658] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.718 [WARNING][4658] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.719 [INFO][4658] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.721 [INFO][4658] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:15.728374 containerd[1489]: 2026-04-13 19:21:15.725 [INFO][4642] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:15.730380 containerd[1489]: time="2026-04-13T19:21:15.728557829Z" level=info msg="TearDown network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" successfully" Apr 13 19:21:15.730380 containerd[1489]: time="2026-04-13T19:21:15.728636842Z" level=info msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" returns successfully" Apr 13 19:21:15.733417 containerd[1489]: time="2026-04-13T19:21:15.733250725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-bxqw4,Uid:a8e45c92-17a2-4c83-ac2b-aa6192ba9a42,Namespace:calico-system,Attempt:1,}" Apr 13 19:21:15.740488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690888835.mount: Deactivated successfully. Apr 13 19:21:15.740597 systemd[1]: run-netns-cni\x2d9288f258\x2df618\x2d8989\x2dc15f\x2d39512a62b30e.mount: Deactivated successfully. 
Apr 13 19:21:15.867282 systemd-networkd[1382]: cali3d0b1d83e92: Gained IPv6LL Apr 13 19:21:15.937757 systemd-networkd[1382]: cali9510e29a13c: Link UP Apr 13 19:21:15.939827 systemd-networkd[1382]: cali9510e29a13c: Gained carrier Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.820 [INFO][4667] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0 calico-apiserver-7cd4769649- calico-system a8e45c92-17a2-4c83-ac2b-aa6192ba9a42 955 0 2026-04-13 19:20:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd4769649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 calico-apiserver-7cd4769649-bxqw4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9510e29a13c [] [] }} ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.820 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.856 [INFO][4677] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" HandleID="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" 
Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.874 [INFO][4677] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" HandleID="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"calico-apiserver-7cd4769649-bxqw4", "timestamp":"2026-04-13 19:21:15.856322445 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024d080)} Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.875 [INFO][4677] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.875 [INFO][4677] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.875 [INFO][4677] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.883 [INFO][4677] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.891 [INFO][4677] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.901 [INFO][4677] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.904 [INFO][4677] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.908 [INFO][4677] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.908 [INFO][4677] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.911 [INFO][4677] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.917 [INFO][4677] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.926 [INFO][4677] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.132/26] block=192.168.62.128/26 handle="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.926 [INFO][4677] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.132/26] handle="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.926 [INFO][4677] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:15.965242 containerd[1489]: 2026-04-13 19:21:15.926 [INFO][4677] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.132/26] IPv6=[] ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" HandleID="k8s-pod-network.1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.929 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"calico-apiserver-7cd4769649-bxqw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9510e29a13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.930 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.132/32] ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.930 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9510e29a13c ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.941 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" 
Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.941 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d", Pod:"calico-apiserver-7cd4769649-bxqw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9510e29a13c", 
MAC:"42:cc:4e:c4:04:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:15.965938 containerd[1489]: 2026-04-13 19:21:15.961 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-bxqw4" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:16.021007 containerd[1489]: time="2026-04-13T19:21:16.020689655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:16.021007 containerd[1489]: time="2026-04-13T19:21:16.020889766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:16.021007 containerd[1489]: time="2026-04-13T19:21:16.020954896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:16.022384 containerd[1489]: time="2026-04-13T19:21:16.021365919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:16.033850 kubelet[2589]: I0413 19:21:16.033462 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gxbxf" podStartSLOduration=47.033445615 podStartE2EDuration="47.033445615s" podCreationTimestamp="2026-04-13 19:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:16.033264267 +0000 UTC m=+52.596975434" watchObservedRunningTime="2026-04-13 19:21:16.033445615 +0000 UTC m=+52.597156781" Apr 13 19:21:16.091266 systemd[1]: Started cri-containerd-1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d.scope - libcontainer container 1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d. Apr 13 19:21:16.207376 containerd[1489]: time="2026-04-13T19:21:16.207333377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-bxqw4,Uid:a8e45c92-17a2-4c83-ac2b-aa6192ba9a42,Namespace:calico-system,Attempt:1,} returns sandbox id \"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d\"" Apr 13 19:21:16.573388 systemd-networkd[1382]: cali9679fda3ab7: Gained IPv6LL Apr 13 19:21:16.592505 containerd[1489]: time="2026-04-13T19:21:16.592440518Z" level=info msg="StopPodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" Apr 13 19:21:16.593750 containerd[1489]: time="2026-04-13T19:21:16.592390070Z" level=info msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" Apr 13 19:21:16.595799 containerd[1489]: time="2026-04-13T19:21:16.595749547Z" level=info msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" Apr 13 19:21:16.740807 systemd[1]: run-containerd-runc-k8s.io-1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d-runc.6tN5fe.mount: Deactivated successfully. 
Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.721 [INFO][4779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.721 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" iface="eth0" netns="/var/run/netns/cni-2762b782-0c19-efcc-67b0-e6ad60a9e054" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.723 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" iface="eth0" netns="/var/run/netns/cni-2762b782-0c19-efcc-67b0-e6ad60a9e054" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.726 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" iface="eth0" netns="/var/run/netns/cni-2762b782-0c19-efcc-67b0-e6ad60a9e054" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.726 [INFO][4779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.726 [INFO][4779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.787 [INFO][4804] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.787 
[INFO][4804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.787 [INFO][4804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.807 [WARNING][4804] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.807 [INFO][4804] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.809 [INFO][4804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:16.819345 containerd[1489]: 2026-04-13 19:21:16.815 [INFO][4779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:16.823587 containerd[1489]: time="2026-04-13T19:21:16.821382380Z" level=info msg="TearDown network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" successfully" Apr 13 19:21:16.823587 containerd[1489]: time="2026-04-13T19:21:16.821427387Z" level=info msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" returns successfully" Apr 13 19:21:16.825536 systemd[1]: run-netns-cni\x2d2762b782\x2d0c19\x2defcc\x2d67b0\x2de6ad60a9e054.mount: Deactivated successfully. 
Apr 13 19:21:16.830848 containerd[1489]: time="2026-04-13T19:21:16.830328715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-d4nq8,Uid:f5fa3599-3a44-4a37-a228-2b86b422f5c0,Namespace:calico-system,Attempt:1,}" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.771 [INFO][4789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.771 [INFO][4789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" iface="eth0" netns="/var/run/netns/cni-ac9fdf46-28f8-ffbe-8a8f-b0c620a032af" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.772 [INFO][4789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" iface="eth0" netns="/var/run/netns/cni-ac9fdf46-28f8-ffbe-8a8f-b0c620a032af" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.772 [INFO][4789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" iface="eth0" netns="/var/run/netns/cni-ac9fdf46-28f8-ffbe-8a8f-b0c620a032af" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.773 [INFO][4789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.773 [INFO][4789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.845 [INFO][4811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.846 [INFO][4811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.846 [INFO][4811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.877 [WARNING][4811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.878 [INFO][4811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.884 [INFO][4811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:16.891659 containerd[1489]: 2026-04-13 19:21:16.889 [INFO][4789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:16.892139 containerd[1489]: time="2026-04-13T19:21:16.892109489Z" level=info msg="TearDown network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" successfully" Apr 13 19:21:16.892163 containerd[1489]: time="2026-04-13T19:21:16.892143614Z" level=info msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" returns successfully" Apr 13 19:21:16.895662 systemd[1]: run-netns-cni\x2dac9fdf46\x2d28f8\x2dffbe\x2d8a8f\x2db0c620a032af.mount: Deactivated successfully. 
Apr 13 19:21:16.901645 containerd[1489]: time="2026-04-13T19:21:16.901575664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4mf58,Uid:37df8aee-778a-40ce-bb48-a75145d05544,Namespace:kube-system,Attempt:1,}" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.797 [INFO][4790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.798 [INFO][4790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" iface="eth0" netns="/var/run/netns/cni-0a138846-1fb2-e2ac-9af6-788c93345994" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.798 [INFO][4790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" iface="eth0" netns="/var/run/netns/cni-0a138846-1fb2-e2ac-9af6-788c93345994" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.799 [INFO][4790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" iface="eth0" netns="/var/run/netns/cni-0a138846-1fb2-e2ac-9af6-788c93345994" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.799 [INFO][4790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.799 [INFO][4790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.904 [INFO][4819] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.904 [INFO][4819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.904 [INFO][4819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.921 [WARNING][4819] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.921 [INFO][4819] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.925 [INFO][4819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:16.933140 containerd[1489]: 2026-04-13 19:21:16.928 [INFO][4790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:16.935474 containerd[1489]: time="2026-04-13T19:21:16.933660234Z" level=info msg="TearDown network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" successfully" Apr 13 19:21:16.935474 containerd[1489]: time="2026-04-13T19:21:16.933695760Z" level=info msg="StopPodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" returns successfully" Apr 13 19:21:16.941132 containerd[1489]: time="2026-04-13T19:21:16.941072133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t82fb,Uid:4e1c37d2-8528-4b9f-9fc1-29276f518f72,Namespace:calico-system,Attempt:1,}" Apr 13 19:21:16.943681 systemd[1]: run-netns-cni\x2d0a138846\x2d1fb2\x2de2ac\x2d9af6\x2d788c93345994.mount: Deactivated successfully. 
Apr 13 19:21:17.182647 systemd-networkd[1382]: calid36aebc3b14: Link UP Apr 13 19:21:17.183709 systemd-networkd[1382]: calid36aebc3b14: Gained carrier Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:16.994 [INFO][4825] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0 calico-apiserver-7cd4769649- calico-system f5fa3599-3a44-4a37-a228-2b86b422f5c0 971 0 2026-04-13 19:20:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd4769649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 calico-apiserver-7cd4769649-d4nq8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid36aebc3b14 [] [] }} ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:16.994 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.073 [INFO][4862] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" HandleID="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.225902 
containerd[1489]: 2026-04-13 19:21:17.102 [INFO][4862] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" HandleID="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003816e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"calico-apiserver-7cd4769649-d4nq8", "timestamp":"2026-04-13 19:21:17.073770762 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003f71e0)} Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.102 [INFO][4862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.103 [INFO][4862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.103 [INFO][4862] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.108 [INFO][4862] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.118 [INFO][4862] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.125 [INFO][4862] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.130 [INFO][4862] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.134 [INFO][4862] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.134 [INFO][4862] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.137 [INFO][4862] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24 Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.143 [INFO][4862] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.158 [INFO][4862] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.133/26] block=192.168.62.128/26 handle="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.158 [INFO][4862] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.133/26] handle="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.165 [INFO][4862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:17.225902 containerd[1489]: 2026-04-13 19:21:17.166 [INFO][4862] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.133/26] IPv6=[] ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" HandleID="k8s-pod-network.4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.174 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"f5fa3599-3a44-4a37-a228-2b86b422f5c0", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"calico-apiserver-7cd4769649-d4nq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid36aebc3b14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.174 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.133/32] ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.174 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid36aebc3b14 ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.187 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" 
Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.189 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"f5fa3599-3a44-4a37-a228-2b86b422f5c0", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24", Pod:"calico-apiserver-7cd4769649-d4nq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid36aebc3b14", 
MAC:"76:a8:90:5d:2a:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.227549 containerd[1489]: 2026-04-13 19:21:17.206 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24" Namespace="calico-system" Pod="calico-apiserver-7cd4769649-d4nq8" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:17.328571 systemd-networkd[1382]: cali4cf04e948ca: Link UP Apr 13 19:21:17.331089 systemd-networkd[1382]: cali4cf04e948ca: Gained carrier Apr 13 19:21:17.351423 containerd[1489]: time="2026-04-13T19:21:17.349894343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:17.351423 containerd[1489]: time="2026-04-13T19:21:17.349980476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:17.351423 containerd[1489]: time="2026-04-13T19:21:17.349997638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.351423 containerd[1489]: time="2026-04-13T19:21:17.350132099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.390261 systemd[1]: Started cri-containerd-4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24.scope - libcontainer container 4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24. 
Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.051 [INFO][4840] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0 coredns-66bc5c9577- kube-system 37df8aee-778a-40ce-bb48-a75145d05544 972 0 2026-04-13 19:20:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 coredns-66bc5c9577-4mf58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4cf04e948ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.051 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.176 [INFO][4870] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" HandleID="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.208 [INFO][4870] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" 
HandleID="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273c10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"coredns-66bc5c9577-4mf58", "timestamp":"2026-04-13 19:21:17.176193695 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010edc0)} Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.210 [INFO][4870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.212 [INFO][4870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.218 [INFO][4870] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.232 [INFO][4870] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.242 [INFO][4870] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.269 [INFO][4870] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.275 [INFO][4870] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.282 [INFO][4870] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.283 [INFO][4870] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.290 [INFO][4870] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.299 [INFO][4870] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4870] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.134/26] block=192.168.62.128/26 handle="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4870] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.134/26] handle="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:21:17.392241 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4870] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.134/26] IPv6=[] ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" HandleID="k8s-pod-network.f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.392814 containerd[1489]: 2026-04-13 19:21:17.323 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"37df8aee-778a-40ce-bb48-a75145d05544", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"coredns-66bc5c9577-4mf58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali4cf04e948ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.392814 containerd[1489]: 2026-04-13 19:21:17.324 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.134/32] ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.392814 containerd[1489]: 2026-04-13 19:21:17.324 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cf04e948ca ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.392814 containerd[1489]: 2026-04-13 19:21:17.338 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 
19:21:17.392814 containerd[1489]: 2026-04-13 19:21:17.344 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"37df8aee-778a-40ce-bb48-a75145d05544", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a", Pod:"coredns-66bc5c9577-4mf58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4cf04e948ca", MAC:"da:73:87:4a:cc:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.392995 containerd[1489]: 2026-04-13 19:21:17.381 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a" Namespace="kube-system" Pod="coredns-66bc5c9577-4mf58" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:17.444412 containerd[1489]: time="2026-04-13T19:21:17.441980236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:17.444412 containerd[1489]: time="2026-04-13T19:21:17.442093133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:17.444412 containerd[1489]: time="2026-04-13T19:21:17.442123178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.444412 containerd[1489]: time="2026-04-13T19:21:17.442221313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.449599 systemd-networkd[1382]: calia6f6daaf5e7: Link UP Apr 13 19:21:17.450296 systemd-networkd[1382]: calia6f6daaf5e7: Gained carrier Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.104 [INFO][4850] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0 csi-node-driver- calico-system 4e1c37d2-8528-4b9f-9fc1-29276f518f72 973 0 2026-04-13 19:20:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 csi-node-driver-t82fb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia6f6daaf5e7 [] [] }} ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.104 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.229 [INFO][4878] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" HandleID="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.497991 
containerd[1489]: 2026-04-13 19:21:17.266 [INFO][4878] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" HandleID="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"csi-node-driver-t82fb", "timestamp":"2026-04-13 19:21:17.229433808 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001862c0)} Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.266 [INFO][4878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.314 [INFO][4878] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.332 [INFO][4878] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.347 [INFO][4878] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.371 [INFO][4878] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.381 [INFO][4878] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.394 [INFO][4878] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.394 [INFO][4878] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.400 [INFO][4878] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3 Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.413 [INFO][4878] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.431 [INFO][4878] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.62.135/26] block=192.168.62.128/26 handle="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.431 [INFO][4878] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.135/26] handle="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.431 [INFO][4878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:17.497991 containerd[1489]: 2026-04-13 19:21:17.431 [INFO][4878] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.135/26] IPv6=[] ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" HandleID="k8s-pod-network.0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.498621 containerd[1489]: 2026-04-13 19:21:17.435 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e1c37d2-8528-4b9f-9fc1-29276f518f72", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"csi-node-driver-t82fb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6f6daaf5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.498621 containerd[1489]: 2026-04-13 19:21:17.435 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.135/32] ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.498621 containerd[1489]: 2026-04-13 19:21:17.435 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6f6daaf5e7 ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.498621 containerd[1489]: 2026-04-13 19:21:17.458 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.498621 
containerd[1489]: 2026-04-13 19:21:17.465 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e1c37d2-8528-4b9f-9fc1-29276f518f72", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3", Pod:"csi-node-driver-t82fb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6f6daaf5e7", MAC:"ce:5c:d8:67:17:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:17.498621 containerd[1489]: 
2026-04-13 19:21:17.488 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3" Namespace="calico-system" Pod="csi-node-driver-t82fb" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:17.512428 systemd[1]: Started cri-containerd-f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a.scope - libcontainer container f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a. Apr 13 19:21:17.555191 containerd[1489]: time="2026-04-13T19:21:17.553187815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:17.555191 containerd[1489]: time="2026-04-13T19:21:17.553269307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:17.555191 containerd[1489]: time="2026-04-13T19:21:17.553284630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.555191 containerd[1489]: time="2026-04-13T19:21:17.553380524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:17.595396 containerd[1489]: time="2026-04-13T19:21:17.594999763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd4769649-d4nq8,Uid:f5fa3599-3a44-4a37-a228-2b86b422f5c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24\"" Apr 13 19:21:17.598674 containerd[1489]: time="2026-04-13T19:21:17.598336907Z" level=info msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" Apr 13 19:21:17.603032 systemd[1]: Started cri-containerd-0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3.scope - libcontainer container 0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3. Apr 13 19:21:17.625180 containerd[1489]: time="2026-04-13T19:21:17.624979447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4mf58,Uid:37df8aee-778a-40ce-bb48-a75145d05544,Namespace:kube-system,Attempt:1,} returns sandbox id \"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a\"" Apr 13 19:21:17.637753 containerd[1489]: time="2026-04-13T19:21:17.637648838Z" level=info msg="CreateContainer within sandbox \"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:21:17.666828 containerd[1489]: time="2026-04-13T19:21:17.666773352Z" level=info msg="CreateContainer within sandbox \"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef\"" Apr 13 19:21:17.670344 containerd[1489]: time="2026-04-13T19:21:17.669551892Z" level=info msg="StartContainer for \"9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef\"" Apr 13 19:21:17.762717 containerd[1489]: time="2026-04-13T19:21:17.762560684Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t82fb,Uid:4e1c37d2-8528-4b9f-9fc1-29276f518f72,Namespace:calico-system,Attempt:1,} returns sandbox id \"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3\"" Apr 13 19:21:17.797210 systemd[1]: run-containerd-runc-k8s.io-9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef-runc.TkJG4Z.mount: Deactivated successfully. Apr 13 19:21:17.808960 systemd[1]: Started cri-containerd-9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef.scope - libcontainer container 9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef. Apr 13 19:21:17.851958 systemd-networkd[1382]: cali9510e29a13c: Gained IPv6LL Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.786 [INFO][5046] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.796 [INFO][5046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" iface="eth0" netns="/var/run/netns/cni-5caa5a36-5d32-2fdf-b761-e32f9043bed9" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.796 [INFO][5046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" iface="eth0" netns="/var/run/netns/cni-5caa5a36-5d32-2fdf-b761-e32f9043bed9" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.799 [INFO][5046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" iface="eth0" netns="/var/run/netns/cni-5caa5a36-5d32-2fdf-b761-e32f9043bed9" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.799 [INFO][5046] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.799 [INFO][5046] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.833 [INFO][5084] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.834 [INFO][5084] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.834 [INFO][5084] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.848 [WARNING][5084] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.848 [INFO][5084] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.858 [INFO][5084] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:17.889080 containerd[1489]: 2026-04-13 19:21:17.872 [INFO][5046] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:17.889080 containerd[1489]: time="2026-04-13T19:21:17.886859038Z" level=info msg="TearDown network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" successfully" Apr 13 19:21:17.889080 containerd[1489]: time="2026-04-13T19:21:17.886906645Z" level=info msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" returns successfully" Apr 13 19:21:17.891201 systemd[1]: run-netns-cni\x2d5caa5a36\x2d5d32\x2d2fdf\x2db761\x2de32f9043bed9.mount: Deactivated successfully. 
Apr 13 19:21:17.895864 containerd[1489]: time="2026-04-13T19:21:17.895815109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658544489c-rrlcl,Uid:f07ad544-5f54-4fa0-9ff2-3bd524dd2c67,Namespace:calico-system,Attempt:1,}" Apr 13 19:21:17.909803 containerd[1489]: time="2026-04-13T19:21:17.908962973Z" level=info msg="StartContainer for \"9e0d3b2ef2d47558edd68f7926de9bf5d450a5617c086079d94c97121819cbef\" returns successfully" Apr 13 19:21:17.926643 containerd[1489]: time="2026-04-13T19:21:17.926566909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:17.929646 containerd[1489]: time="2026-04-13T19:21:17.929601087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:21:17.935836 containerd[1489]: time="2026-04-13T19:21:17.934758505Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:17.946211 containerd[1489]: time="2026-04-13T19:21:17.946096096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.891872895s" Apr 13 19:21:17.946211 containerd[1489]: time="2026-04-13T19:21:17.946191230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 19:21:17.947321 containerd[1489]: time="2026-04-13T19:21:17.946675463Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:17.949333 containerd[1489]: time="2026-04-13T19:21:17.949182041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:21:17.953620 containerd[1489]: time="2026-04-13T19:21:17.952446494Z" level=info msg="CreateContainer within sandbox \"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:21:17.970666 containerd[1489]: time="2026-04-13T19:21:17.970591271Z" level=info msg="CreateContainer within sandbox \"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845\"" Apr 13 19:21:17.973390 containerd[1489]: time="2026-04-13T19:21:17.971430998Z" level=info msg="StartContainer for \"0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845\"" Apr 13 19:21:18.030846 systemd[1]: Started cri-containerd-0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845.scope - libcontainer container 0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845. 
Apr 13 19:21:18.073442 kubelet[2589]: I0413 19:21:18.073378 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4mf58" podStartSLOduration=49.073359307 podStartE2EDuration="49.073359307s" podCreationTimestamp="2026-04-13 19:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:18.072864194 +0000 UTC m=+54.636575400" watchObservedRunningTime="2026-04-13 19:21:18.073359307 +0000 UTC m=+54.637070433" Apr 13 19:21:18.231863 containerd[1489]: time="2026-04-13T19:21:18.231306603Z" level=info msg="StartContainer for \"0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845\" returns successfully" Apr 13 19:21:18.239803 systemd-networkd[1382]: cali1f010a0e420: Link UP Apr 13 19:21:18.242632 systemd-networkd[1382]: cali1f010a0e420: Gained carrier Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.064 [INFO][5118] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0 calico-kube-controllers-658544489c- calico-system f07ad544-5f54-4fa0-9ff2-3bd524dd2c67 991 0 2026-04-13 19:20:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:658544489c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-b-7ea64c4796 calico-kube-controllers-658544489c-rrlcl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f010a0e420 [] [] }} ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" 
WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.065 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.149 [INFO][5157] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" HandleID="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.174 [INFO][5157] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" HandleID="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-b-7ea64c4796", "pod":"calico-kube-controllers-658544489c-rrlcl", "timestamp":"2026-04-13 19:21:18.149639936 +0000 UTC"}, Hostname:"ci-4081-3-7-b-7ea64c4796", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400002a420)} Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.174 [INFO][5157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.174 [INFO][5157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.174 [INFO][5157] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-b-7ea64c4796' Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.178 [INFO][5157] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.187 [INFO][5157] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.195 [INFO][5157] ipam/ipam.go 526: Trying affinity for 192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.198 [INFO][5157] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.202 [INFO][5157] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.203 [INFO][5157] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.208 [INFO][5157] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7 Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.216 [INFO][5157] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" 
host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.227 [INFO][5157] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.136/26] block=192.168.62.128/26 handle="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.228 [INFO][5157] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.136/26] handle="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" host="ci-4081-3-7-b-7ea64c4796" Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.228 [INFO][5157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:18.261901 containerd[1489]: 2026-04-13 19:21:18.228 [INFO][5157] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.136/26] IPv6=[] ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" HandleID="k8s-pod-network.7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.235 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0", GenerateName:"calico-kube-controllers-658544489c-", Namespace:"calico-system", SelfLink:"", UID:"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658544489c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"", Pod:"calico-kube-controllers-658544489c-rrlcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f010a0e420", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.235 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.136/32] ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.235 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f010a0e420 ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.241 [INFO][5118] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.241 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0", GenerateName:"calico-kube-controllers-658544489c-", Namespace:"calico-system", SelfLink:"", UID:"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658544489c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7", Pod:"calico-kube-controllers-658544489c-rrlcl", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f010a0e420", MAC:"be:79:93:0e:00:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:18.262932 containerd[1489]: 2026-04-13 19:21:18.257 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7" Namespace="calico-system" Pod="calico-kube-controllers-658544489c-rrlcl" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:18.304813 containerd[1489]: time="2026-04-13T19:21:18.303265512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:18.304813 containerd[1489]: time="2026-04-13T19:21:18.303339282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:18.304813 containerd[1489]: time="2026-04-13T19:21:18.303351404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:18.304813 containerd[1489]: time="2026-04-13T19:21:18.303452819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:18.329282 systemd[1]: Started cri-containerd-7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7.scope - libcontainer container 7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7. 
Apr 13 19:21:18.390388 containerd[1489]: time="2026-04-13T19:21:18.390195239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658544489c-rrlcl,Uid:f07ad544-5f54-4fa0-9ff2-3bd524dd2c67,Namespace:calico-system,Attempt:1,} returns sandbox id \"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7\"" Apr 13 19:21:18.491367 systemd-networkd[1382]: calid36aebc3b14: Gained IPv6LL Apr 13 19:21:19.067375 systemd-networkd[1382]: calia6f6daaf5e7: Gained IPv6LL Apr 13 19:21:19.148969 systemd[1]: run-containerd-runc-k8s.io-0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845-runc.auhUXM.mount: Deactivated successfully. Apr 13 19:21:19.387638 systemd-networkd[1382]: cali4cf04e948ca: Gained IPv6LL Apr 13 19:21:19.451559 systemd-networkd[1382]: cali1f010a0e420: Gained IPv6LL Apr 13 19:21:20.344052 containerd[1489]: time="2026-04-13T19:21:20.342317724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:20.344052 containerd[1489]: time="2026-04-13T19:21:20.343961120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:21:20.344886 containerd[1489]: time="2026-04-13T19:21:20.344827884Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:20.348512 containerd[1489]: time="2026-04-13T19:21:20.348440763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:20.349814 containerd[1489]: time="2026-04-13T19:21:20.349766873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.400530343s" Apr 13 19:21:20.349814 containerd[1489]: time="2026-04-13T19:21:20.349810719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:21:20.352586 containerd[1489]: time="2026-04-13T19:21:20.352530190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:21:20.356891 containerd[1489]: time="2026-04-13T19:21:20.356836928Z" level=info msg="CreateContainer within sandbox \"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:21:20.381529 containerd[1489]: time="2026-04-13T19:21:20.381442018Z" level=info msg="CreateContainer within sandbox \"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9f40326c6a6413558b54b11e5da6f3f1295d4e58940435d88c2a6775a56b7d9\"" Apr 13 19:21:20.383446 containerd[1489]: time="2026-04-13T19:21:20.382408517Z" level=info msg="StartContainer for \"c9f40326c6a6413558b54b11e5da6f3f1295d4e58940435d88c2a6775a56b7d9\"" Apr 13 19:21:20.428276 systemd[1]: Started cri-containerd-c9f40326c6a6413558b54b11e5da6f3f1295d4e58940435d88c2a6775a56b7d9.scope - libcontainer container c9f40326c6a6413558b54b11e5da6f3f1295d4e58940435d88c2a6775a56b7d9. 
Apr 13 19:21:20.478248 containerd[1489]: time="2026-04-13T19:21:20.478155696Z" level=info msg="StartContainer for \"c9f40326c6a6413558b54b11e5da6f3f1295d4e58940435d88c2a6775a56b7d9\" returns successfully" Apr 13 19:21:20.815140 containerd[1489]: time="2026-04-13T19:21:20.815087003Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:20.820106 containerd[1489]: time="2026-04-13T19:21:20.820058876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 19:21:20.822300 containerd[1489]: time="2026-04-13T19:21:20.822253191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 469.671074ms" Apr 13 19:21:20.822367 containerd[1489]: time="2026-04-13T19:21:20.822306038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:21:20.825234 containerd[1489]: time="2026-04-13T19:21:20.825190652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:21:20.836870 containerd[1489]: time="2026-04-13T19:21:20.836729148Z" level=info msg="CreateContainer within sandbox \"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:21:20.855989 containerd[1489]: time="2026-04-13T19:21:20.855937384Z" level=info msg="CreateContainer within sandbox \"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"f0bc75130e34f4e9947b96b224dd7c58eb61587f4ce6573eea419de345d890d3\"" Apr 13 19:21:20.856740 containerd[1489]: time="2026-04-13T19:21:20.856712495Z" level=info msg="StartContainer for \"f0bc75130e34f4e9947b96b224dd7c58eb61587f4ce6573eea419de345d890d3\"" Apr 13 19:21:20.892225 systemd[1]: Started cri-containerd-f0bc75130e34f4e9947b96b224dd7c58eb61587f4ce6573eea419de345d890d3.scope - libcontainer container f0bc75130e34f4e9947b96b224dd7c58eb61587f4ce6573eea419de345d890d3. Apr 13 19:21:20.945697 containerd[1489]: time="2026-04-13T19:21:20.945637735Z" level=info msg="StartContainer for \"f0bc75130e34f4e9947b96b224dd7c58eb61587f4ce6573eea419de345d890d3\" returns successfully" Apr 13 19:21:21.135795 kubelet[2589]: I0413 19:21:21.134591 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-tlznz" podStartSLOduration=35.239889714 podStartE2EDuration="39.134559436s" podCreationTimestamp="2026-04-13 19:20:42 +0000 UTC" firstStartedPulling="2026-04-13 19:21:14.053646589 +0000 UTC m=+50.617357755" lastFinishedPulling="2026-04-13 19:21:17.948316311 +0000 UTC m=+54.512027477" observedRunningTime="2026-04-13 19:21:19.123500452 +0000 UTC m=+55.687211618" watchObservedRunningTime="2026-04-13 19:21:21.134559436 +0000 UTC m=+57.698270642" Apr 13 19:21:21.135795 kubelet[2589]: I0413 19:21:21.134922 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7cd4769649-bxqw4" podStartSLOduration=35.99430181 podStartE2EDuration="40.134914406s" podCreationTimestamp="2026-04-13 19:20:41 +0000 UTC" firstStartedPulling="2026-04-13 19:21:16.211130761 +0000 UTC m=+52.774841927" lastFinishedPulling="2026-04-13 19:21:20.351743357 +0000 UTC m=+56.915454523" observedRunningTime="2026-04-13 19:21:21.134300239 +0000 UTC m=+57.698011405" watchObservedRunningTime="2026-04-13 19:21:21.134914406 +0000 UTC m=+57.698625572" Apr 13 19:21:22.119280 kubelet[2589]: I0413 19:21:22.119240 2589 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:21:22.492951 containerd[1489]: time="2026-04-13T19:21:22.491877752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:22.492951 containerd[1489]: time="2026-04-13T19:21:22.492907336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:21:22.494939 containerd[1489]: time="2026-04-13T19:21:22.494856327Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:22.502760 containerd[1489]: time="2026-04-13T19:21:22.502674937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:22.504833 containerd[1489]: time="2026-04-13T19:21:22.504611806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.679377228s" Apr 13 19:21:22.504833 containerd[1489]: time="2026-04-13T19:21:22.504657733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:21:22.509354 containerd[1489]: time="2026-04-13T19:21:22.509210047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:21:22.514390 containerd[1489]: time="2026-04-13T19:21:22.514243588Z" level=info msg="CreateContainer within sandbox 
\"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:21:22.551552 containerd[1489]: time="2026-04-13T19:21:22.551449331Z" level=info msg="CreateContainer within sandbox \"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ceb7b7c8841eb629fdcdd7633958856e477214fefe1ed93d3d4718e7299c25f0\"" Apr 13 19:21:22.554347 containerd[1489]: time="2026-04-13T19:21:22.554053854Z" level=info msg="StartContainer for \"ceb7b7c8841eb629fdcdd7633958856e477214fefe1ed93d3d4718e7299c25f0\"" Apr 13 19:21:22.640244 systemd[1]: Started cri-containerd-ceb7b7c8841eb629fdcdd7633958856e477214fefe1ed93d3d4718e7299c25f0.scope - libcontainer container ceb7b7c8841eb629fdcdd7633958856e477214fefe1ed93d3d4718e7299c25f0. Apr 13 19:21:22.678458 containerd[1489]: time="2026-04-13T19:21:22.678305923Z" level=info msg="StartContainer for \"ceb7b7c8841eb629fdcdd7633958856e477214fefe1ed93d3d4718e7299c25f0\" returns successfully" Apr 13 19:21:23.580778 containerd[1489]: time="2026-04-13T19:21:23.580731175Z" level=info msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.642 [WARNING][5451] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"f5fa3599-3a44-4a37-a228-2b86b422f5c0", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24", Pod:"calico-apiserver-7cd4769649-d4nq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid36aebc3b14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.642 [INFO][5451] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.642 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" iface="eth0" netns="" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.642 [INFO][5451] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.642 [INFO][5451] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.673 [INFO][5460] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.673 [INFO][5460] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.673 [INFO][5460] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.716 [WARNING][5460] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.716 [INFO][5460] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.723 [INFO][5460] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:23.733054 containerd[1489]: 2026-04-13 19:21:23.728 [INFO][5451] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.733573 containerd[1489]: time="2026-04-13T19:21:23.733086950Z" level=info msg="TearDown network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" successfully" Apr 13 19:21:23.733573 containerd[1489]: time="2026-04-13T19:21:23.733112793Z" level=info msg="StopPodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" returns successfully" Apr 13 19:21:23.734023 containerd[1489]: time="2026-04-13T19:21:23.733971871Z" level=info msg="RemovePodSandbox for \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" Apr 13 19:21:23.744600 containerd[1489]: time="2026-04-13T19:21:23.744467633Z" level=info msg="Forcibly stopping sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\"" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.818 [WARNING][5475] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"f5fa3599-3a44-4a37-a228-2b86b422f5c0", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"4107dd4f61f2bb04ce04e7a3b1440b14db75029f5c7d6ca0eec32784df96de24", Pod:"calico-apiserver-7cd4769649-d4nq8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid36aebc3b14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.819 [INFO][5475] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.819 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" iface="eth0" netns="" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.819 [INFO][5475] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.819 [INFO][5475] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.856 [INFO][5482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.856 [INFO][5482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.856 [INFO][5482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.872 [WARNING][5482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.872 [INFO][5482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" HandleID="k8s-pod-network.de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--d4nq8-eth0" Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.876 [INFO][5482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:23.898177 containerd[1489]: 2026-04-13 19:21:23.887 [INFO][5475] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684" Apr 13 19:21:23.898177 containerd[1489]: time="2026-04-13T19:21:23.897063121Z" level=info msg="TearDown network for sandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" successfully" Apr 13 19:21:23.903816 kubelet[2589]: I0413 19:21:23.903040 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7cd4769649-d4nq8" podStartSLOduration=39.692042976 podStartE2EDuration="42.902989295s" podCreationTimestamp="2026-04-13 19:20:41 +0000 UTC" firstStartedPulling="2026-04-13 19:21:17.61240599 +0000 UTC m=+54.176117156" lastFinishedPulling="2026-04-13 19:21:20.823352309 +0000 UTC m=+57.387063475" observedRunningTime="2026-04-13 19:21:21.157424667 +0000 UTC m=+57.721135833" watchObservedRunningTime="2026-04-13 19:21:23.902989295 +0000 UTC m=+60.466700461" Apr 13 19:21:23.908838 containerd[1489]: time="2026-04-13T19:21:23.907916692Z" level=warning msg="Failed to get podSandbox status for container event for 
sandboxID \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:21:23.908838 containerd[1489]: time="2026-04-13T19:21:23.908388877Z" level=info msg="RemovePodSandbox \"de15ca58bacce88132857c6397993b4f8e430764ca12d55bf7f802cb4853d684\" returns successfully" Apr 13 19:21:23.912272 containerd[1489]: time="2026-04-13T19:21:23.912126071Z" level=info msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.020 [WARNING][5502] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d", 
Pod:"calico-apiserver-7cd4769649-bxqw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9510e29a13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.020 [INFO][5502] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.020 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" iface="eth0" netns="" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.021 [INFO][5502] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.021 [INFO][5502] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.059 [INFO][5509] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.059 [INFO][5509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.059 [INFO][5509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.073 [WARNING][5509] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.073 [INFO][5509] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.076 [INFO][5509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.084183 containerd[1489]: 2026-04-13 19:21:24.081 [INFO][5502] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.086152 containerd[1489]: time="2026-04-13T19:21:24.084250814Z" level=info msg="TearDown network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" successfully" Apr 13 19:21:24.086152 containerd[1489]: time="2026-04-13T19:21:24.084291059Z" level=info msg="StopPodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" returns successfully" Apr 13 19:21:24.086821 containerd[1489]: time="2026-04-13T19:21:24.086412827Z" level=info msg="RemovePodSandbox for \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" Apr 13 19:21:24.086821 containerd[1489]: time="2026-04-13T19:21:24.086459193Z" level=info msg="Forcibly stopping sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\"" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.157 [WARNING][5523] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0", GenerateName:"calico-apiserver-7cd4769649-", Namespace:"calico-system", SelfLink:"", UID:"a8e45c92-17a2-4c83-ac2b-aa6192ba9a42", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd4769649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"1dc99eebfc8435881f5e6dd4dc716e971520eb6b4c8f5a33f098decfde571a5d", Pod:"calico-apiserver-7cd4769649-bxqw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9510e29a13c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.157 [INFO][5523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.157 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" iface="eth0" netns="" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.157 [INFO][5523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.157 [INFO][5523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.186 [INFO][5530] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.187 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.187 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.198 [WARNING][5530] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.198 [INFO][5530] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" HandleID="k8s-pod-network.5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--apiserver--7cd4769649--bxqw4-eth0" Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.200 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.208187 containerd[1489]: 2026-04-13 19:21:24.204 [INFO][5523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131" Apr 13 19:21:24.208187 containerd[1489]: time="2026-04-13T19:21:24.206746987Z" level=info msg="TearDown network for sandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" successfully" Apr 13 19:21:24.216338 containerd[1489]: time="2026-04-13T19:21:24.216169585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:24.216836 containerd[1489]: time="2026-04-13T19:21:24.216713939Z" level=info msg="RemovePodSandbox \"5b60475e7466061d777dc0f1b92a6ca6b85a2cfaf724cd0a86b1bd19b5927131\" returns successfully" Apr 13 19:21:24.218183 containerd[1489]: time="2026-04-13T19:21:24.218154854Z" level=info msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.266 [WARNING][5544] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"37df8aee-778a-40ce-bb48-a75145d05544", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a", Pod:"coredns-66bc5c9577-4mf58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4cf04e948ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.266 [INFO][5544] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.266 [INFO][5544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" iface="eth0" netns="" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.266 [INFO][5544] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.266 [INFO][5544] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.290 [INFO][5551] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.290 [INFO][5551] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.290 [INFO][5551] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.304 [WARNING][5551] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.304 [INFO][5551] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.306 [INFO][5551] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.311894 containerd[1489]: 2026-04-13 19:21:24.309 [INFO][5544] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.312918 containerd[1489]: time="2026-04-13T19:21:24.311920732Z" level=info msg="TearDown network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" successfully" Apr 13 19:21:24.312918 containerd[1489]: time="2026-04-13T19:21:24.311977539Z" level=info msg="StopPodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" returns successfully" Apr 13 19:21:24.312918 containerd[1489]: time="2026-04-13T19:21:24.312670913Z" level=info msg="RemovePodSandbox for \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" Apr 13 19:21:24.312918 containerd[1489]: time="2026-04-13T19:21:24.312708638Z" level=info msg="Forcibly stopping sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\"" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.357 [WARNING][5565] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"37df8aee-778a-40ce-bb48-a75145d05544", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"f1907663454aac4b795da0e2f665582c014f45ed245f0bfe02b32e7ddc52781a", Pod:"coredns-66bc5c9577-4mf58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4cf04e948ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.357 [INFO][5565] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.357 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" iface="eth0" netns="" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.357 [INFO][5565] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.357 [INFO][5565] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.381 [INFO][5572] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.381 [INFO][5572] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.381 [INFO][5572] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.393 [WARNING][5572] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.393 [INFO][5572] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" HandleID="k8s-pod-network.5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--4mf58-eth0" Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.396 [INFO][5572] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.401413 containerd[1489]: 2026-04-13 19:21:24.399 [INFO][5565] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c" Apr 13 19:21:24.401932 containerd[1489]: time="2026-04-13T19:21:24.401474557Z" level=info msg="TearDown network for sandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" successfully" Apr 13 19:21:24.405561 containerd[1489]: time="2026-04-13T19:21:24.405487822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:24.405706 containerd[1489]: time="2026-04-13T19:21:24.405589756Z" level=info msg="RemovePodSandbox \"5256e062bc75846b331872bf0764115c5c34e5ad18b17031951ea827fe311a3c\" returns successfully" Apr 13 19:21:24.406238 containerd[1489]: time="2026-04-13T19:21:24.406192197Z" level=info msg="StopPodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.449 [WARNING][5586] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e1c37d2-8528-4b9f-9fc1-29276f518f72", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3", Pod:"csi-node-driver-t82fb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6f6daaf5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.449 [INFO][5586] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.449 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" iface="eth0" netns="" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.450 [INFO][5586] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.450 [INFO][5586] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.474 [INFO][5593] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.474 [INFO][5593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.474 [INFO][5593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.485 [WARNING][5593] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.485 [INFO][5593] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.487 [INFO][5593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.493106 containerd[1489]: 2026-04-13 19:21:24.490 [INFO][5586] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.493106 containerd[1489]: time="2026-04-13T19:21:24.493007972Z" level=info msg="TearDown network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" successfully" Apr 13 19:21:24.493106 containerd[1489]: time="2026-04-13T19:21:24.493072460Z" level=info msg="StopPodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" returns successfully" Apr 13 19:21:24.495248 containerd[1489]: time="2026-04-13T19:21:24.494673598Z" level=info msg="RemovePodSandbox for \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" Apr 13 19:21:24.495248 containerd[1489]: time="2026-04-13T19:21:24.494744447Z" level=info msg="Forcibly stopping sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\"" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.541 [WARNING][5607] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e1c37d2-8528-4b9f-9fc1-29276f518f72", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3", Pod:"csi-node-driver-t82fb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6f6daaf5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.542 [INFO][5607] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.542 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" iface="eth0" netns="" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.542 [INFO][5607] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.542 [INFO][5607] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.565 [INFO][5614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.565 [INFO][5614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.565 [INFO][5614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.578 [WARNING][5614] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.578 [INFO][5614] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" HandleID="k8s-pod-network.045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Workload="ci--4081--3--7--b--7ea64c4796-k8s-csi--node--driver--t82fb-eth0" Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.580 [INFO][5614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.585889 containerd[1489]: 2026-04-13 19:21:24.583 [INFO][5607] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128" Apr 13 19:21:24.586955 containerd[1489]: time="2026-04-13T19:21:24.586537697Z" level=info msg="TearDown network for sandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" successfully" Apr 13 19:21:24.592273 containerd[1489]: time="2026-04-13T19:21:24.592053405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:24.592273 containerd[1489]: time="2026-04-13T19:21:24.592151058Z" level=info msg="RemovePodSandbox \"045cd65edaa6c8f18fe5606fbaf84468781a0a29bd5558933eb0035e4743a128\" returns successfully" Apr 13 19:21:24.594122 containerd[1489]: time="2026-04-13T19:21:24.592975050Z" level=info msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.642 [WARNING][5629] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d1b93d06-471c-488c-b357-caa189e1a4ca", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac", Pod:"goldmane-cccfbd5cf-tlznz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali3d0b1d83e92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.642 [INFO][5629] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.642 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" iface="eth0" netns="" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.642 [INFO][5629] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.642 [INFO][5629] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.672 [INFO][5636] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.676 [INFO][5636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.676 [INFO][5636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.691 [WARNING][5636] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.691 [INFO][5636] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.694 [INFO][5636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.701453 containerd[1489]: 2026-04-13 19:21:24.699 [INFO][5629] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.701453 containerd[1489]: time="2026-04-13T19:21:24.701385433Z" level=info msg="TearDown network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" successfully" Apr 13 19:21:24.701453 containerd[1489]: time="2026-04-13T19:21:24.701413877Z" level=info msg="StopPodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" returns successfully" Apr 13 19:21:24.703138 containerd[1489]: time="2026-04-13T19:21:24.702789344Z" level=info msg="RemovePodSandbox for \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" Apr 13 19:21:24.703138 containerd[1489]: time="2026-04-13T19:21:24.702829189Z" level=info msg="Forcibly stopping sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\"" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.749 [WARNING][5650] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d1b93d06-471c-488c-b357-caa189e1a4ca", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"b012b6b71737554071412c482654841ca23d7e39e3567b38451653b33a616bac", Pod:"goldmane-cccfbd5cf-tlznz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d0b1d83e92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.749 [INFO][5650] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.749 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" iface="eth0" netns="" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.749 [INFO][5650] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.749 [INFO][5650] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.773 [INFO][5657] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.773 [INFO][5657] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.773 [INFO][5657] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.784 [WARNING][5657] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.784 [INFO][5657] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" HandleID="k8s-pod-network.0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Workload="ci--4081--3--7--b--7ea64c4796-k8s-goldmane--cccfbd5cf--tlznz-eth0" Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.786 [INFO][5657] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.791327 containerd[1489]: 2026-04-13 19:21:24.789 [INFO][5650] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0" Apr 13 19:21:24.791327 containerd[1489]: time="2026-04-13T19:21:24.791287346Z" level=info msg="TearDown network for sandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" successfully" Apr 13 19:21:24.796265 containerd[1489]: time="2026-04-13T19:21:24.796203453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:24.796639 containerd[1489]: time="2026-04-13T19:21:24.796359514Z" level=info msg="RemovePodSandbox \"0d33cef8cb08a923763dbd2b1cc01b5badd2828378c2902e8c3c77f1000c85f0\" returns successfully" Apr 13 19:21:24.797059 containerd[1489]: time="2026-04-13T19:21:24.796995200Z" level=info msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.838 [WARNING][5671] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0", GenerateName:"calico-kube-controllers-658544489c-", Namespace:"calico-system", SelfLink:"", UID:"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658544489c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7", Pod:"calico-kube-controllers-658544489c-rrlcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f010a0e420", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.839 [INFO][5671] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.839 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" iface="eth0" netns="" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.839 [INFO][5671] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.839 [INFO][5671] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.863 [INFO][5678] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.863 [INFO][5678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.863 [INFO][5678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.877 [WARNING][5678] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.878 [INFO][5678] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.881 [INFO][5678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.886123 containerd[1489]: 2026-04-13 19:21:24.883 [INFO][5671] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.886123 containerd[1489]: time="2026-04-13T19:21:24.885959426Z" level=info msg="TearDown network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" successfully" Apr 13 19:21:24.886123 containerd[1489]: time="2026-04-13T19:21:24.885989230Z" level=info msg="StopPodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" returns successfully" Apr 13 19:21:24.887209 containerd[1489]: time="2026-04-13T19:21:24.886768536Z" level=info msg="RemovePodSandbox for \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" Apr 13 19:21:24.887209 containerd[1489]: time="2026-04-13T19:21:24.886817303Z" level=info msg="Forcibly stopping sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\"" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.931 [WARNING][5692] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0", GenerateName:"calico-kube-controllers-658544489c-", Namespace:"calico-system", SelfLink:"", UID:"f07ad544-5f54-4fa0-9ff2-3bd524dd2c67", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658544489c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7", Pod:"calico-kube-controllers-658544489c-rrlcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f010a0e420", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.931 [INFO][5692] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.932 [INFO][5692] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" iface="eth0" netns="" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.932 [INFO][5692] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.932 [INFO][5692] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.962 [INFO][5699] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.963 [INFO][5699] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.963 [INFO][5699] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.976 [WARNING][5699] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.976 [INFO][5699] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" HandleID="k8s-pod-network.b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Workload="ci--4081--3--7--b--7ea64c4796-k8s-calico--kube--controllers--658544489c--rrlcl-eth0" Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.979 [INFO][5699] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:24.988105 containerd[1489]: 2026-04-13 19:21:24.984 [INFO][5692] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469" Apr 13 19:21:24.989279 containerd[1489]: time="2026-04-13T19:21:24.988118402Z" level=info msg="TearDown network for sandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" successfully" Apr 13 19:21:24.993823 containerd[1489]: time="2026-04-13T19:21:24.993761607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:24.994272 containerd[1489]: time="2026-04-13T19:21:24.993881943Z" level=info msg="RemovePodSandbox \"b11cd43a0dab65ed7876dc9388adefbcef863ead095672f7bdd7daa57d5b1469\" returns successfully" Apr 13 19:21:24.994524 containerd[1489]: time="2026-04-13T19:21:24.994499507Z" level=info msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.048 [WARNING][5713] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.048 [INFO][5713] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.048 [INFO][5713] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" iface="eth0" netns="" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.048 [INFO][5713] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.048 [INFO][5713] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.075 [INFO][5720] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.075 [INFO][5720] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.075 [INFO][5720] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.087 [WARNING][5720] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.087 [INFO][5720] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.089 [INFO][5720] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:25.094730 containerd[1489]: 2026-04-13 19:21:25.092 [INFO][5713] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.094730 containerd[1489]: time="2026-04-13T19:21:25.094338492Z" level=info msg="TearDown network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" successfully" Apr 13 19:21:25.094730 containerd[1489]: time="2026-04-13T19:21:25.094369776Z" level=info msg="StopPodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" returns successfully" Apr 13 19:21:25.096845 containerd[1489]: time="2026-04-13T19:21:25.095995194Z" level=info msg="RemovePodSandbox for \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" Apr 13 19:21:25.096845 containerd[1489]: time="2026-04-13T19:21:25.096196261Z" level=info msg="Forcibly stopping sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\"" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.138 [WARNING][5734] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" WorkloadEndpoint="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.139 [INFO][5734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.139 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" iface="eth0" netns="" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.139 [INFO][5734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.139 [INFO][5734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.171 [INFO][5741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.171 [INFO][5741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.171 [INFO][5741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.183 [WARNING][5741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.184 [INFO][5741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" HandleID="k8s-pod-network.d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Workload="ci--4081--3--7--b--7ea64c4796-k8s-whisker--5f5f6bcd87--76n4z-eth0" Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.186 [INFO][5741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:25.190749 containerd[1489]: 2026-04-13 19:21:25.188 [INFO][5734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94" Apr 13 19:21:25.191260 containerd[1489]: time="2026-04-13T19:21:25.190790493Z" level=info msg="TearDown network for sandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" successfully" Apr 13 19:21:25.196722 containerd[1489]: time="2026-04-13T19:21:25.196656038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:25.196858 containerd[1489]: time="2026-04-13T19:21:25.196778135Z" level=info msg="RemovePodSandbox \"d44bc0200b39031551c9872907c7f685d1954a93f2a8209c63b99c5269887c94\" returns successfully" Apr 13 19:21:25.197365 containerd[1489]: time="2026-04-13T19:21:25.197328928Z" level=info msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.249 [WARNING][5755] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f175d8-d90c-45ef-9f27-602d07f6d20c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6", Pod:"coredns-66bc5c9577-gxbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9679fda3ab7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.249 [INFO][5755] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.249 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" iface="eth0" netns="" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.249 [INFO][5755] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.249 [INFO][5755] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.273 [INFO][5762] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.273 [INFO][5762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.273 [INFO][5762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.286 [WARNING][5762] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.286 [INFO][5762] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.291 [INFO][5762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:25.297257 containerd[1489]: 2026-04-13 19:21:25.294 [INFO][5755] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.297908 containerd[1489]: time="2026-04-13T19:21:25.297299120Z" level=info msg="TearDown network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" successfully" Apr 13 19:21:25.297908 containerd[1489]: time="2026-04-13T19:21:25.297327924Z" level=info msg="StopPodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" returns successfully" Apr 13 19:21:25.298198 containerd[1489]: time="2026-04-13T19:21:25.297933885Z" level=info msg="RemovePodSandbox for \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" Apr 13 19:21:25.298198 containerd[1489]: time="2026-04-13T19:21:25.298159875Z" level=info msg="Forcibly stopping sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\"" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.343 [WARNING][5777] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f175d8-d90c-45ef-9f27-602d07f6d20c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-b-7ea64c4796", ContainerID:"552535cb6292e9e62063563fb70c997add1bd95414d6cc50e7e16fe710bff0c6", Pod:"coredns-66bc5c9577-gxbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9679fda3ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.344 [INFO][5777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.344 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" iface="eth0" netns="" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.344 [INFO][5777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.344 [INFO][5777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.372 [INFO][5784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.372 [INFO][5784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.372 [INFO][5784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.384 [WARNING][5784] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.384 [INFO][5784] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" HandleID="k8s-pod-network.a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Workload="ci--4081--3--7--b--7ea64c4796-k8s-coredns--66bc5c9577--gxbxf-eth0" Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.387 [INFO][5784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:21:25.392463 containerd[1489]: 2026-04-13 19:21:25.389 [INFO][5777] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa" Apr 13 19:21:25.392978 containerd[1489]: time="2026-04-13T19:21:25.392529877Z" level=info msg="TearDown network for sandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" successfully" Apr 13 19:21:25.403040 containerd[1489]: time="2026-04-13T19:21:25.402944032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:21:25.403233 containerd[1489]: time="2026-04-13T19:21:25.403068929Z" level=info msg="RemovePodSandbox \"a0bdc0b52934e5ce608ded3fcc82278b9d7df978e6cb5160774f22130884d4fa\" returns successfully" Apr 13 19:21:26.394068 containerd[1489]: time="2026-04-13T19:21:26.393973533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:26.395592 containerd[1489]: time="2026-04-13T19:21:26.395397841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:21:26.396829 containerd[1489]: time="2026-04-13T19:21:26.396485945Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:26.399410 containerd[1489]: time="2026-04-13T19:21:26.399349764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:26.400771 containerd[1489]: time="2026-04-13T19:21:26.400720826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.891458732s" Apr 13 19:21:26.400944 containerd[1489]: time="2026-04-13T19:21:26.400922493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:21:26.403723 containerd[1489]: time="2026-04-13T19:21:26.402988926Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:21:26.440882 containerd[1489]: time="2026-04-13T19:21:26.440837737Z" level=info msg="CreateContainer within sandbox \"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:21:26.462894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000808433.mount: Deactivated successfully. Apr 13 19:21:26.464671 containerd[1489]: time="2026-04-13T19:21:26.464455024Z" level=info msg="CreateContainer within sandbox \"7555a22d23fb9599136ed10ac5edce62f1c034604af0f1c73f254d34335ce2b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6d32a03bbb66eb9c283afa0c52dded8f2176a79c73d584ef68d2233216ca39bc\"" Apr 13 19:21:26.469716 containerd[1489]: time="2026-04-13T19:21:26.466764250Z" level=info msg="StartContainer for \"6d32a03bbb66eb9c283afa0c52dded8f2176a79c73d584ef68d2233216ca39bc\"" Apr 13 19:21:26.522274 systemd[1]: Started cri-containerd-6d32a03bbb66eb9c283afa0c52dded8f2176a79c73d584ef68d2233216ca39bc.scope - libcontainer container 6d32a03bbb66eb9c283afa0c52dded8f2176a79c73d584ef68d2233216ca39bc. 
Apr 13 19:21:26.566927 containerd[1489]: time="2026-04-13T19:21:26.566691159Z" level=info msg="StartContainer for \"6d32a03bbb66eb9c283afa0c52dded8f2176a79c73d584ef68d2233216ca39bc\" returns successfully" Apr 13 19:21:27.205931 kubelet[2589]: I0413 19:21:27.205492 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-658544489c-rrlcl" podStartSLOduration=35.197114837 podStartE2EDuration="43.205466628s" podCreationTimestamp="2026-04-13 19:20:44 +0000 UTC" firstStartedPulling="2026-04-13 19:21:18.394426627 +0000 UTC m=+54.958137753" lastFinishedPulling="2026-04-13 19:21:26.402778338 +0000 UTC m=+62.966489544" observedRunningTime="2026-04-13 19:21:27.202596853 +0000 UTC m=+63.766308059" watchObservedRunningTime="2026-04-13 19:21:27.205466628 +0000 UTC m=+63.769177794" Apr 13 19:21:28.125738 containerd[1489]: time="2026-04-13T19:21:28.125650773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:28.127584 containerd[1489]: time="2026-04-13T19:21:28.127501012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:21:28.128781 containerd[1489]: time="2026-04-13T19:21:28.128703088Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:28.133119 containerd[1489]: time="2026-04-13T19:21:28.132829223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:28.133688 containerd[1489]: time="2026-04-13T19:21:28.133643888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.730579712s" Apr 13 19:21:28.133866 containerd[1489]: time="2026-04-13T19:21:28.133686374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:21:28.141293 containerd[1489]: time="2026-04-13T19:21:28.141248193Z" level=info msg="CreateContainer within sandbox \"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:21:28.162388 containerd[1489]: time="2026-04-13T19:21:28.162328364Z" level=info msg="CreateContainer within sandbox \"0528a911a5b463b9947405de3cbe35c47e7f5f05c17dcd7390c2b28a0d47e6d3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"25d96ee2a66f0e2001d8667e3040ab3bf6278c7449f30696855390e318b559a0\"" Apr 13 19:21:28.164970 containerd[1489]: time="2026-04-13T19:21:28.164890976Z" level=info msg="StartContainer for \"25d96ee2a66f0e2001d8667e3040ab3bf6278c7449f30696855390e318b559a0\"" Apr 13 19:21:28.217349 systemd[1]: Started cri-containerd-25d96ee2a66f0e2001d8667e3040ab3bf6278c7449f30696855390e318b559a0.scope - libcontainer container 25d96ee2a66f0e2001d8667e3040ab3bf6278c7449f30696855390e318b559a0. 
Apr 13 19:21:28.258533 containerd[1489]: time="2026-04-13T19:21:28.258454857Z" level=info msg="StartContainer for \"25d96ee2a66f0e2001d8667e3040ab3bf6278c7449f30696855390e318b559a0\" returns successfully" Apr 13 19:21:28.753520 kubelet[2589]: I0413 19:21:28.752924 2589 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 19:21:28.753520 kubelet[2589]: I0413 19:21:28.753005 2589 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:21:29.233958 kubelet[2589]: I0413 19:21:29.233874 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t82fb" podStartSLOduration=34.868812821 podStartE2EDuration="45.23384924s" podCreationTimestamp="2026-04-13 19:20:44 +0000 UTC" firstStartedPulling="2026-04-13 19:21:17.770111624 +0000 UTC m=+54.333822790" lastFinishedPulling="2026-04-13 19:21:28.135148083 +0000 UTC m=+64.698859209" observedRunningTime="2026-04-13 19:21:29.23213382 +0000 UTC m=+65.795844986" watchObservedRunningTime="2026-04-13 19:21:29.23384924 +0000 UTC m=+65.797560406" Apr 13 19:21:30.569846 kubelet[2589]: I0413 19:21:30.569227 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:22:20.122498 systemd[1]: run-containerd-runc-k8s.io-0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845-runc.3bv3qb.mount: Deactivated successfully. Apr 13 19:22:47.477481 systemd[1]: Started sshd@7-178.105.12.165:22-50.85.169.122:43388.service - OpenSSH per-connection server daemon (50.85.169.122:43388). 
Apr 13 19:22:47.622065 sshd[6172]: Accepted publickey for core from 50.85.169.122 port 43388 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:22:47.625079 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:22:47.633423 systemd-logind[1461]: New session 8 of user core. Apr 13 19:22:47.640330 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:22:47.850341 sshd[6172]: pam_unix(sshd:session): session closed for user core Apr 13 19:22:47.858890 systemd[1]: sshd@7-178.105.12.165:22-50.85.169.122:43388.service: Deactivated successfully. Apr 13 19:22:47.862323 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:22:47.866537 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:22:47.868669 systemd-logind[1461]: Removed session 8. Apr 13 19:22:52.889833 systemd[1]: Started sshd@8-178.105.12.165:22-50.85.169.122:38924.service - OpenSSH per-connection server daemon (50.85.169.122:38924). Apr 13 19:22:53.016097 sshd[6248]: Accepted publickey for core from 50.85.169.122 port 38924 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:22:53.017463 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:22:53.023644 systemd-logind[1461]: New session 9 of user core. Apr 13 19:22:53.030463 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:22:53.247465 sshd[6248]: pam_unix(sshd:session): session closed for user core Apr 13 19:22:53.255805 systemd[1]: sshd@8-178.105.12.165:22-50.85.169.122:38924.service: Deactivated successfully. Apr 13 19:22:53.258865 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:22:53.261303 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:22:53.262870 systemd-logind[1461]: Removed session 9. 
Apr 13 19:22:58.279740 systemd[1]: Started sshd@9-178.105.12.165:22-50.85.169.122:38938.service - OpenSSH per-connection server daemon (50.85.169.122:38938). Apr 13 19:22:58.411811 sshd[6282]: Accepted publickey for core from 50.85.169.122 port 38938 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:22:58.414183 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:22:58.423672 systemd-logind[1461]: New session 10 of user core. Apr 13 19:22:58.432480 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:22:58.634844 sshd[6282]: pam_unix(sshd:session): session closed for user core Apr 13 19:22:58.641966 systemd[1]: sshd@9-178.105.12.165:22-50.85.169.122:38938.service: Deactivated successfully. Apr 13 19:22:58.645891 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:22:58.647089 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:22:58.648442 systemd-logind[1461]: Removed session 10. Apr 13 19:23:03.674478 systemd[1]: Started sshd@10-178.105.12.165:22-50.85.169.122:38824.service - OpenSSH per-connection server daemon (50.85.169.122:38824). Apr 13 19:23:03.806912 sshd[6305]: Accepted publickey for core from 50.85.169.122 port 38824 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:03.809130 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:03.815971 systemd-logind[1461]: New session 11 of user core. Apr 13 19:23:03.824415 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:23:04.002050 sshd[6305]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:04.008893 systemd[1]: sshd@10-178.105.12.165:22-50.85.169.122:38824.service: Deactivated successfully. Apr 13 19:23:04.014085 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:23:04.015450 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. 
Apr 13 19:23:04.025862 systemd-logind[1461]: Removed session 11. Apr 13 19:23:04.034459 systemd[1]: Started sshd@11-178.105.12.165:22-50.85.169.122:38828.service - OpenSSH per-connection server daemon (50.85.169.122:38828). Apr 13 19:23:04.163107 sshd[6328]: Accepted publickey for core from 50.85.169.122 port 38828 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:04.165633 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:04.174460 systemd-logind[1461]: New session 12 of user core. Apr 13 19:23:04.184410 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:23:04.437430 sshd[6328]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:04.444806 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:23:04.445063 systemd[1]: sshd@11-178.105.12.165:22-50.85.169.122:38828.service: Deactivated successfully. Apr 13 19:23:04.450745 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:23:04.468078 systemd-logind[1461]: Removed session 12. Apr 13 19:23:04.476431 systemd[1]: Started sshd@12-178.105.12.165:22-50.85.169.122:38834.service - OpenSSH per-connection server daemon (50.85.169.122:38834). Apr 13 19:23:04.615220 sshd[6358]: Accepted publickey for core from 50.85.169.122 port 38834 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:04.617114 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:04.624162 systemd-logind[1461]: New session 13 of user core. Apr 13 19:23:04.633401 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:23:04.827868 sshd[6358]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:04.834744 systemd[1]: sshd@12-178.105.12.165:22-50.85.169.122:38834.service: Deactivated successfully. Apr 13 19:23:04.838876 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 13 19:23:04.841379 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:23:04.842713 systemd-logind[1461]: Removed session 13. Apr 13 19:23:09.872721 systemd[1]: Started sshd@13-178.105.12.165:22-50.85.169.122:47446.service - OpenSSH per-connection server daemon (50.85.169.122:47446). Apr 13 19:23:10.006401 sshd[6391]: Accepted publickey for core from 50.85.169.122 port 47446 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:10.009128 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:10.019423 systemd-logind[1461]: New session 14 of user core. Apr 13 19:23:10.028349 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:23:10.218501 sshd[6391]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:10.224864 systemd[1]: sshd@13-178.105.12.165:22-50.85.169.122:47446.service: Deactivated successfully. Apr 13 19:23:10.229732 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:23:10.232749 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:23:10.234593 systemd-logind[1461]: Removed session 14. Apr 13 19:23:15.258375 systemd[1]: Started sshd@14-178.105.12.165:22-50.85.169.122:47460.service - OpenSSH per-connection server daemon (50.85.169.122:47460). Apr 13 19:23:15.403306 sshd[6404]: Accepted publickey for core from 50.85.169.122 port 47460 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:15.405420 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:15.414080 systemd-logind[1461]: New session 15 of user core. Apr 13 19:23:15.418264 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:23:15.600595 sshd[6404]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:15.607707 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. 
Apr 13 19:23:15.607834 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:23:15.608504 systemd[1]: sshd@14-178.105.12.165:22-50.85.169.122:47460.service: Deactivated successfully. Apr 13 19:23:15.614643 systemd-logind[1461]: Removed session 15. Apr 13 19:23:15.629573 systemd[1]: Started sshd@15-178.105.12.165:22-50.85.169.122:47474.service - OpenSSH per-connection server daemon (50.85.169.122:47474). Apr 13 19:23:15.754929 sshd[6416]: Accepted publickey for core from 50.85.169.122 port 47474 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:15.759135 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:15.769858 systemd-logind[1461]: New session 16 of user core. Apr 13 19:23:15.778400 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:23:16.198227 sshd[6416]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:16.204893 systemd[1]: sshd@15-178.105.12.165:22-50.85.169.122:47474.service: Deactivated successfully. Apr 13 19:23:16.208833 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:23:16.211440 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:23:16.231471 systemd[1]: Started sshd@16-178.105.12.165:22-50.85.169.122:47476.service - OpenSSH per-connection server daemon (50.85.169.122:47476). Apr 13 19:23:16.234328 systemd-logind[1461]: Removed session 16. Apr 13 19:23:16.359490 sshd[6427]: Accepted publickey for core from 50.85.169.122 port 47476 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:16.361887 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:16.368831 systemd-logind[1461]: New session 17 of user core. Apr 13 19:23:16.373251 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 13 19:23:17.266907 sshd[6427]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:17.274082 systemd[1]: sshd@16-178.105.12.165:22-50.85.169.122:47476.service: Deactivated successfully.
Apr 13 19:23:17.278862 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 19:23:17.283632 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit.
Apr 13 19:23:17.305192 systemd[1]: Started sshd@17-178.105.12.165:22-50.85.169.122:47478.service - OpenSSH per-connection server daemon (50.85.169.122:47478).
Apr 13 19:23:17.307887 systemd-logind[1461]: Removed session 17.
Apr 13 19:23:17.434222 sshd[6450]: Accepted publickey for core from 50.85.169.122 port 47478 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:17.437238 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:17.448003 systemd-logind[1461]: New session 18 of user core.
Apr 13 19:23:17.453348 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 19:23:17.791398 sshd[6450]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:17.798197 systemd[1]: sshd@17-178.105.12.165:22-50.85.169.122:47478.service: Deactivated successfully.
Apr 13 19:23:17.802810 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 19:23:17.804897 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
Apr 13 19:23:17.819809 systemd-logind[1461]: Removed session 18.
Apr 13 19:23:17.829794 systemd[1]: Started sshd@18-178.105.12.165:22-50.85.169.122:47492.service - OpenSSH per-connection server daemon (50.85.169.122:47492).
Apr 13 19:23:17.953071 sshd[6462]: Accepted publickey for core from 50.85.169.122 port 47492 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:17.954926 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:17.960685 systemd-logind[1461]: New session 19 of user core.
Apr 13 19:23:17.969379 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 19:23:18.194489 sshd[6462]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:18.201117 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
Apr 13 19:23:18.201912 systemd[1]: sshd@18-178.105.12.165:22-50.85.169.122:47492.service: Deactivated successfully.
Apr 13 19:23:18.205542 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 19:23:18.211153 systemd-logind[1461]: Removed session 19.
Apr 13 19:23:20.150489 systemd[1]: run-containerd-runc-k8s.io-0676f1eea66813791d8d28aae7c13c4ba6af1e9b3268af310019bbc60db58845-runc.LZsP8R.mount: Deactivated successfully.
Apr 13 19:23:23.228658 systemd[1]: Started sshd@19-178.105.12.165:22-50.85.169.122:33316.service - OpenSSH per-connection server daemon (50.85.169.122:33316).
Apr 13 19:23:23.366961 sshd[6497]: Accepted publickey for core from 50.85.169.122 port 33316 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:23.368813 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:23.375947 systemd-logind[1461]: New session 20 of user core.
Apr 13 19:23:23.385937 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 19:23:23.595243 sshd[6497]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:23.601514 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit.
Apr 13 19:23:23.604325 systemd[1]: sshd@19-178.105.12.165:22-50.85.169.122:33316.service: Deactivated successfully.
Apr 13 19:23:23.610141 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 19:23:23.612068 systemd-logind[1461]: Removed session 20.
Apr 13 19:23:28.629459 systemd[1]: Started sshd@20-178.105.12.165:22-50.85.169.122:33326.service - OpenSSH per-connection server daemon (50.85.169.122:33326).
Apr 13 19:23:28.755195 sshd[6531]: Accepted publickey for core from 50.85.169.122 port 33326 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:28.756273 sshd[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:28.764845 systemd-logind[1461]: New session 21 of user core.
Apr 13 19:23:28.770609 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 19:23:28.953125 sshd[6531]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:28.959891 systemd[1]: sshd@20-178.105.12.165:22-50.85.169.122:33326.service: Deactivated successfully.
Apr 13 19:23:28.965768 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 19:23:28.969200 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit.
Apr 13 19:23:28.970356 systemd-logind[1461]: Removed session 21.
Apr 13 19:23:43.585221 kubelet[2589]: E0413 19:23:43.584742 2589 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36464->10.0.0.2:2379: read: connection timed out"
Apr 13 19:23:43.981731 systemd[1]: cri-containerd-4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053.scope: Deactivated successfully.
Apr 13 19:23:43.984574 systemd[1]: cri-containerd-4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053.scope: Consumed 16.290s CPU time.
Apr 13 19:23:44.018079 containerd[1489]: time="2026-04-13T19:23:44.014608002Z" level=info msg="shim disconnected" id=4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053 namespace=k8s.io
Apr 13 19:23:44.018079 containerd[1489]: time="2026-04-13T19:23:44.014694235Z" level=warning msg="cleaning up after shim disconnected" id=4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053 namespace=k8s.io
Apr 13 19:23:44.018079 containerd[1489]: time="2026-04-13T19:23:44.014726113Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:44.017865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053-rootfs.mount: Deactivated successfully.
Apr 13 19:23:44.269642 systemd[1]: cri-containerd-a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd.scope: Deactivated successfully.
Apr 13 19:23:44.269954 systemd[1]: cri-containerd-a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd.scope: Consumed 4.831s CPU time, 16.9M memory peak, 0B memory swap peak.
Apr 13 19:23:44.296704 containerd[1489]: time="2026-04-13T19:23:44.296637122Z" level=info msg="shim disconnected" id=a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd namespace=k8s.io
Apr 13 19:23:44.296704 containerd[1489]: time="2026-04-13T19:23:44.296695917Z" level=warning msg="cleaning up after shim disconnected" id=a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd namespace=k8s.io
Apr 13 19:23:44.296704 containerd[1489]: time="2026-04-13T19:23:44.296705197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:44.298046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd-rootfs.mount: Deactivated successfully.
Apr 13 19:23:44.756441 kubelet[2589]: I0413 19:23:44.752668 2589 scope.go:117] "RemoveContainer" containerID="a4e065e4adaac5396364c53503f10a379d9cfab47ae75ac1ec449e88106e1dfd"
Apr 13 19:23:44.760382 kubelet[2589]: I0413 19:23:44.759937 2589 scope.go:117] "RemoveContainer" containerID="4d925d464a991ccb037dc223aa05742b8cd8d61699c9f159062309742cae4053"
Apr 13 19:23:44.760576 containerd[1489]: time="2026-04-13T19:23:44.759526832Z" level=info msg="CreateContainer within sandbox \"100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 19:23:44.792432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529025579.mount: Deactivated successfully.
Apr 13 19:23:44.794794 containerd[1489]: time="2026-04-13T19:23:44.794624914Z" level=info msg="CreateContainer within sandbox \"05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 19:23:44.795875 containerd[1489]: time="2026-04-13T19:23:44.795816300Z" level=info msg="CreateContainer within sandbox \"100d50d6a8a4a97423460fb6f5196d773c244c15349b585783de8f2ea721c455\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e77ed41aa2615f5ea7c53d1d990b306bf69e7afec02b12f2e058da4726b5b07d\""
Apr 13 19:23:44.797506 containerd[1489]: time="2026-04-13T19:23:44.797304463Z" level=info msg="StartContainer for \"e77ed41aa2615f5ea7c53d1d990b306bf69e7afec02b12f2e058da4726b5b07d\""
Apr 13 19:23:44.816243 containerd[1489]: time="2026-04-13T19:23:44.816178020Z" level=info msg="CreateContainer within sandbox \"05ce3559ae214f408864b80c4c99c5a3874e6a5574be27b799a92dee6fa0582a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b765493996aa19daa97d2c7b1445ef4252e7d043d13966e8b6fa860a4ba70cc9\""
Apr 13 19:23:44.817153 containerd[1489]: time="2026-04-13T19:23:44.817116227Z" level=info msg="StartContainer for \"b765493996aa19daa97d2c7b1445ef4252e7d043d13966e8b6fa860a4ba70cc9\""
Apr 13 19:23:44.835281 systemd[1]: Started cri-containerd-e77ed41aa2615f5ea7c53d1d990b306bf69e7afec02b12f2e058da4726b5b07d.scope - libcontainer container e77ed41aa2615f5ea7c53d1d990b306bf69e7afec02b12f2e058da4726b5b07d.
Apr 13 19:23:44.858276 systemd[1]: Started cri-containerd-b765493996aa19daa97d2c7b1445ef4252e7d043d13966e8b6fa860a4ba70cc9.scope - libcontainer container b765493996aa19daa97d2c7b1445ef4252e7d043d13966e8b6fa860a4ba70cc9.
Apr 13 19:23:44.901338 containerd[1489]: time="2026-04-13T19:23:44.901182221Z" level=info msg="StartContainer for \"e77ed41aa2615f5ea7c53d1d990b306bf69e7afec02b12f2e058da4726b5b07d\" returns successfully"
Apr 13 19:23:44.920965 containerd[1489]: time="2026-04-13T19:23:44.920882634Z" level=info msg="StartContainer for \"b765493996aa19daa97d2c7b1445ef4252e7d043d13966e8b6fa860a4ba70cc9\" returns successfully"
Apr 13 19:23:47.680632 kubelet[2589]: E0413 19:23:47.678431 2589 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36066->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-b-7ea64c4796.18a601036aa0d061 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-b-7ea64c4796,UID:36edd602cbd20ad738302bdb873875ec,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-b-7ea64c4796,},FirstTimestamp:2026-04-13 19:23:37.229693025 +0000 UTC m=+193.793404191,LastTimestamp:2026-04-13 19:23:37.229693025 +0000 UTC m=+193.793404191,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-b-7ea64c4796,}"