Apr 21 09:57:38.882152 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 21 09:57:38.882179 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 09:57:38.882190 kernel: KASLR enabled
Apr 21 09:57:38.882196 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 09:57:38.882202 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 21 09:57:38.882207 kernel: random: crng init done
Apr 21 09:57:38.882214 kernel: ACPI: Early table checksum verification disabled
Apr 21 09:57:38.882220 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 21 09:57:38.882226 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 21 09:57:38.882234 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882240 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882246 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882252 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882259 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882266 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882274 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882280 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882287 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:38.882293 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 09:57:38.882300 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 21 09:57:38.882306 kernel: NUMA: Failed to initialise from firmware
Apr 21 09:57:38.882312 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:38.882319 kernel: NUMA: NODE_DATA [mem 0x139671800-0x139676fff]
Apr 21 09:57:38.882325 kernel: Zone ranges:
Apr 21 09:57:38.882331 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 21 09:57:38.882339 kernel: DMA32 empty
Apr 21 09:57:38.882345 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 21 09:57:38.882352 kernel: Movable zone start for each node
Apr 21 09:57:38.882358 kernel: Early memory node ranges
Apr 21 09:57:38.882365 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 21 09:57:38.882371 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 21 09:57:38.882377 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 21 09:57:38.882384 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 21 09:57:38.882390 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 21 09:57:38.882396 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 21 09:57:38.882403 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 21 09:57:38.882409 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:38.882417 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 09:57:38.882424 kernel: psci: probing for conduit method from ACPI.
Apr 21 09:57:38.882430 kernel: psci: PSCIv1.1 detected in firmware.
Apr 21 09:57:38.882439 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 09:57:38.882446 kernel: psci: Trusted OS migration not required
Apr 21 09:57:38.882453 kernel: psci: SMC Calling Convention v1.1
Apr 21 09:57:38.882461 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 21 09:57:38.882468 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 09:57:38.882475 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 09:57:38.882482 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 09:57:38.882488 kernel: Detected PIPT I-cache on CPU0
Apr 21 09:57:38.882495 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 09:57:38.882502 kernel: CPU features: detected: Hardware dirty bit management
Apr 21 09:57:38.882509 kernel: CPU features: detected: Spectre-v4
Apr 21 09:57:38.882515 kernel: CPU features: detected: Spectre-BHB
Apr 21 09:57:38.882522 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 21 09:57:38.882531 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 21 09:57:38.882537 kernel: CPU features: detected: ARM erratum 1418040
Apr 21 09:57:38.882544 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 21 09:57:38.882551 kernel: alternatives: applying boot alternatives
Apr 21 09:57:38.882559 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 09:57:38.882566 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 09:57:38.882573 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 09:57:38.882580 kernel: Fallback order for Node 0: 0
Apr 21 09:57:38.882587 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 21 09:57:38.882594 kernel: Policy zone: Normal
Apr 21 09:57:38.882600 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 09:57:38.882608 kernel: software IO TLB: area num 2.
Apr 21 09:57:38.882615 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 21 09:57:38.882623 kernel: Memory: 3882824K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213176K reserved, 0K cma-reserved)
Apr 21 09:57:38.882630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 09:57:38.882636 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 09:57:38.882644 kernel: rcu: RCU event tracing is enabled.
Apr 21 09:57:38.882651 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 09:57:38.882658 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 09:57:38.882665 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 09:57:38.882671 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 09:57:38.882678 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 09:57:38.882685 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 09:57:38.882694 kernel: GICv3: 256 SPIs implemented
Apr 21 09:57:38.882700 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 09:57:38.882707 kernel: Root IRQ handler: gic_handle_irq
Apr 21 09:57:38.882714 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 21 09:57:38.882721 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 21 09:57:38.882728 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 21 09:57:38.882735 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 21 09:57:38.882742 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 21 09:57:38.882748 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 21 09:57:38.882755 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 21 09:57:38.882762 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 09:57:38.882770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:38.882777 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 21 09:57:38.882784 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 21 09:57:38.882791 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 21 09:57:38.882798 kernel: Console: colour dummy device 80x25
Apr 21 09:57:38.882805 kernel: ACPI: Core revision 20230628
Apr 21 09:57:38.882812 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 21 09:57:38.882840 kernel: pid_max: default: 32768 minimum: 301
Apr 21 09:57:38.882847 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 09:57:38.882854 kernel: landlock: Up and running.
Apr 21 09:57:38.882863 kernel: SELinux: Initializing.
Apr 21 09:57:38.882870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:38.882877 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:38.882884 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:38.882891 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:38.882898 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 09:57:38.882906 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 09:57:38.882913 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 21 09:57:38.882919 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 21 09:57:38.882928 kernel: Remapping and enabling EFI services.
Apr 21 09:57:38.882935 kernel: smp: Bringing up secondary CPUs ...
Apr 21 09:57:38.882942 kernel: Detected PIPT I-cache on CPU1
Apr 21 09:57:38.882949 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 21 09:57:38.882956 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 21 09:57:38.882963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:38.882970 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 21 09:57:38.882977 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 09:57:38.882984 kernel: SMP: Total of 2 processors activated.
Apr 21 09:57:38.882990 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 09:57:38.882999 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 21 09:57:38.883006 kernel: CPU features: detected: Common not Private translations
Apr 21 09:57:38.883018 kernel: CPU features: detected: CRC32 instructions
Apr 21 09:57:38.883027 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 21 09:57:38.883034 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 21 09:57:38.883042 kernel: CPU features: detected: LSE atomic instructions
Apr 21 09:57:38.883049 kernel: CPU features: detected: Privileged Access Never
Apr 21 09:57:38.883056 kernel: CPU features: detected: RAS Extension Support
Apr 21 09:57:38.883065 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 21 09:57:38.883073 kernel: CPU: All CPU(s) started at EL1
Apr 21 09:57:38.883080 kernel: alternatives: applying system-wide alternatives
Apr 21 09:57:38.883087 kernel: devtmpfs: initialized
Apr 21 09:57:38.883095 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 09:57:38.883102 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 09:57:38.883110 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 09:57:38.883118 kernel: SMBIOS 3.0.0 present.
Apr 21 09:57:38.883156 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 21 09:57:38.883167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 09:57:38.883174 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 09:57:38.883181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 09:57:38.883189 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 09:57:38.883196 kernel: audit: initializing netlink subsys (disabled)
Apr 21 09:57:38.883204 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Apr 21 09:57:38.883211 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 09:57:38.883218 kernel: cpuidle: using governor menu
Apr 21 09:57:38.883228 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 09:57:38.883235 kernel: ASID allocator initialised with 32768 entries
Apr 21 09:57:38.883243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 09:57:38.883250 kernel: Serial: AMBA PL011 UART driver
Apr 21 09:57:38.883257 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 21 09:57:38.883265 kernel: Modules: 0 pages in range for non-PLT usage
Apr 21 09:57:38.883272 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 09:57:38.883279 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 09:57:38.883287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 09:57:38.883296 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 09:57:38.883303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 09:57:38.883311 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 09:57:38.883318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 09:57:38.883326 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 09:57:38.883333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 09:57:38.883340 kernel: ACPI: Added _OSI(Module Device)
Apr 21 09:57:38.883366 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 09:57:38.883374 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 09:57:38.883383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 09:57:38.883391 kernel: ACPI: Interpreter enabled
Apr 21 09:57:38.883398 kernel: ACPI: Using GIC for interrupt routing
Apr 21 09:57:38.883405 kernel: ACPI: MCFG table detected, 1 entries
Apr 21 09:57:38.883413 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 21 09:57:38.883420 kernel: printk: console [ttyAMA0] enabled
Apr 21 09:57:38.883427 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 09:57:38.883582 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 09:57:38.883661 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 21 09:57:38.883728 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 21 09:57:38.883793 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 21 09:57:38.883897 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 21 09:57:38.883909 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 21 09:57:38.883916 kernel: PCI host bridge to bus 0000:00
Apr 21 09:57:38.883990 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:38.884051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 21 09:57:38.884115 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:38.884193 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 09:57:38.884277 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 21 09:57:38.884356 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 21 09:57:38.884424 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 21 09:57:38.884491 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:38.884571 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.884640 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 21 09:57:38.884720 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.884788 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 21 09:57:38.884904 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.884972 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 21 09:57:38.885051 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885118 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 21 09:57:38.885240 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885313 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 21 09:57:38.885386 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885453 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 21 09:57:38.885532 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885599 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 21 09:57:38.885671 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885739 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 21 09:57:38.885833 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:38.885909 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 21 09:57:38.885993 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 21 09:57:38.886059 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 21 09:57:38.886147 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:38.886224 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 21 09:57:38.886295 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:38.886364 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:38.886440 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 09:57:38.886513 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 21 09:57:38.886592 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 09:57:38.886662 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 21 09:57:38.886731 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 21 09:57:38.886807 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 09:57:38.886908 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 21 09:57:38.886987 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 09:57:38.887064 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 21 09:57:38.887151 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 21 09:57:38.887237 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 09:57:38.887308 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 21 09:57:38.887378 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:38.887459 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:38.887529 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 21 09:57:38.887598 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 21 09:57:38.887667 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:38.887736 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 21 09:57:38.887803 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:38.887916 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:38.887992 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 21 09:57:38.888059 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 21 09:57:38.888123 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 21 09:57:38.888208 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 21 09:57:38.888274 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:38.888340 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:38.888407 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 21 09:57:38.888472 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 21 09:57:38.888541 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 21 09:57:38.888608 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 21 09:57:38.888673 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:38.888739 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:38.888807 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 21 09:57:38.888891 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:38.888992 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:38.889071 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 09:57:38.889188 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:38.889266 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:38.889362 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 09:57:38.889433 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:38.889499 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:38.889567 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 09:57:38.889634 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:38.889706 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:38.889774 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 21 09:57:38.889948 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:38.890023 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 21 09:57:38.890092 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:38.890190 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 21 09:57:38.890269 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:38.890355 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 21 09:57:38.890430 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:38.890497 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 21 09:57:38.890565 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:38.890638 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:38.890702 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:38.890771 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:38.890920 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:38.890990 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:38.891057 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:38.891144 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 21 09:57:38.891215 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:38.891286 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 21 09:57:38.891358 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 21 09:57:38.891424 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 21 09:57:38.891489 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 09:57:38.891554 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 21 09:57:38.891619 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 09:57:38.891688 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 21 09:57:38.891753 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 09:57:38.891844 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 21 09:57:38.891924 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 21 09:57:38.891992 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 21 09:57:38.892058 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 21 09:57:38.892125 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 21 09:57:38.892246 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 21 09:57:38.892316 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 21 09:57:38.892381 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 21 09:57:38.892449 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 21 09:57:38.892539 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 21 09:57:38.892629 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 21 09:57:38.892700 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 21 09:57:38.892923 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 21 09:57:38.893015 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 21 09:57:38.893111 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:38.893204 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 21 09:57:38.893278 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 09:57:38.893355 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 21 09:57:38.893421 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 21 09:57:38.893489 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:38.893564 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 21 09:57:38.893634 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 09:57:38.894175 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 21 09:57:38.894263 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 21 09:57:38.894329 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:38.894403 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:38.894471 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 21 09:57:38.894537 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 09:57:38.894603 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 21 09:57:38.894677 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 21 09:57:38.894744 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:38.895248 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:38.895399 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 09:57:38.895474 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 21 09:57:38.895542 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 21 09:57:38.895612 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:38.895694 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 21 09:57:38.895773 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 21 09:57:38.895857 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 09:57:38.895955 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 21 09:57:38.896035 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 21 09:57:38.896103 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:38.896198 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 21 09:57:38.896289 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 21 09:57:38.896361 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 09:57:38.896435 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 21 09:57:38.896502 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:38.896569 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:38.896647 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 21 09:57:38.896716 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 21 09:57:38.896788 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 21 09:57:38.896934 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 09:57:38.897007 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 21 09:57:38.897079 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:38.897979 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:38.898080 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 09:57:38.898169 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 21 09:57:38.898240 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:38.898307 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:38.898375 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 09:57:38.898441 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 21 09:57:38.898516 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 21 09:57:38.898604 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:38.898682 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:38.898742 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 21 09:57:38.898800 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:38.898883 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 21 09:57:38.898948 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 21 09:57:38.899014 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:38.899090 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 21 09:57:38.899172 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 21 09:57:38.899236 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:38.899305 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 21 09:57:38.899368 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 21 09:57:38.899432 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:38.899502 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 21 09:57:38.899563 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 21 09:57:38.899639 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:38.899708 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 21 09:57:38.899769 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 21 09:57:38.901951 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:38.902061 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 21 09:57:38.902144 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:38.902224 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:38.902297 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 21 09:57:38.902365 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:38.902427 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:38.902497 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 21 09:57:38.902561 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:38.902628 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:38.902699 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 21 09:57:38.902763 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 21 09:57:38.902901 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:38.902916 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 21 09:57:38.902924 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 21 09:57:38.902932 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 21 09:57:38.902940 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 21 09:57:38.902948 kernel: iommu: Default domain type: Translated
Apr 21 09:57:38.902956 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 09:57:38.902964 kernel: efivars: Registered efivars operations
Apr 21 09:57:38.902972 kernel: vgaarb: loaded
Apr 21 09:57:38.902984 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 09:57:38.902991 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 09:57:38.902999 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 09:57:38.903009 kernel: pnp: PnP ACPI init
Apr 21 09:57:38.903096 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 21 09:57:38.903109 kernel: pnp: PnP ACPI: found 1 devices
Apr 21 09:57:38.903117 kernel: NET: Registered PF_INET protocol family
Apr 21 09:57:38.903163 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 09:57:38.903179 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 09:57:38.903187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 09:57:38.903195 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 09:57:38.903203
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 09:57:38.903211 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 09:57:38.903218 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:38.903227 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:38.903234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 09:57:38.903325 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 21 09:57:38.903342 kernel: PCI: CLS 0 bytes, default 64 Apr 21 09:57:38.903350 kernel: kvm [1]: HYP mode not available Apr 21 09:57:38.903358 kernel: Initialise system trusted keyrings Apr 21 09:57:38.903365 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 09:57:38.903373 kernel: Key type asymmetric registered Apr 21 09:57:38.903381 kernel: Asymmetric key parser 'x509' registered Apr 21 09:57:38.903389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 21 09:57:38.903397 kernel: io scheduler mq-deadline registered Apr 21 09:57:38.903404 kernel: io scheduler kyber registered Apr 21 09:57:38.903414 kernel: io scheduler bfq registered Apr 21 09:57:38.903423 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 21 09:57:38.903496 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 21 09:57:38.903565 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 21 09:57:38.903632 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.903702 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 21 09:57:38.903770 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 21 09:57:38.906004 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.906097 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 21 09:57:38.906190 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 21 09:57:38.906261 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.906333 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 21 09:57:38.906400 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 21 09:57:38.906474 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.906543 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 21 09:57:38.906609 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 21 09:57:38.906675 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.906747 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 21 09:57:38.906826 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 21 09:57:38.907983 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.908056 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 21 09:57:38.908159 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 21 09:57:38.908244 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.908322 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 21 09:57:38.908391 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 21 09:57:38.908464 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:38.908476 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 21 09:57:38.908543 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 21 09:57:38.908611 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 21 09:57:38.908679 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 21 09:57:38.908689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 21 09:57:38.908701 kernel: ACPI: button: Power Button [PWRB]
Apr 21 09:57:38.908709 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 21 09:57:38.908781 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 21 09:57:38.909947 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 21 09:57:38.909968 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 09:57:38.909977 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 21 09:57:38.910066 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 21 09:57:38.910081 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 21 09:57:38.910093 kernel: thunder_xcv, ver 1.0
Apr 21 09:57:38.910107 kernel: thunder_bgx, ver 1.0
Apr 21 09:57:38.910115 kernel: nicpf, ver 1.0
Apr 21 09:57:38.910123 kernel: nicvf, ver 1.0
Apr 21 09:57:38.910230 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 21 09:57:38.910311 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T09:57:38 UTC (1776765458)
Apr 21 09:57:38.910322 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 21 09:57:38.910330 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 21 09:57:38.910338 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 21 09:57:38.910349 kernel: watchdog: Hard watchdog permanently disabled
Apr 21 09:57:38.910357 kernel: NET: Registered PF_INET6 protocol family
Apr 21 09:57:38.910365 kernel: Segment Routing with IPv6
Apr 21 09:57:38.910373 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 09:57:38.910380 kernel: NET: Registered PF_PACKET protocol family
Apr 21 09:57:38.910388 kernel: Key type dns_resolver registered
Apr 21 09:57:38.910396 kernel: registered taskstats version 1
Apr 21 09:57:38.910404 kernel: Loading compiled-in X.509 certificates
Apr 21 09:57:38.910412 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4'
Apr 21 09:57:38.910421 kernel: Key type .fscrypt registered
Apr 21 09:57:38.910429 kernel: Key type fscrypt-provisioning registered
Apr 21 09:57:38.910437 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 09:57:38.910444 kernel: ima: Allocated hash algorithm: sha1
Apr 21 09:57:38.910452 kernel: ima: No architecture policies found
Apr 21 09:57:38.910460 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 21 09:57:38.910468 kernel: clk: Disabling unused clocks
Apr 21 09:57:38.910476 kernel: Freeing unused kernel memory: 39424K
Apr 21 09:57:38.910484 kernel: Run /init as init process
Apr 21 09:57:38.910491 kernel: with arguments:
Apr 21 09:57:38.910501 kernel: /init
Apr 21 09:57:38.910508 kernel: with environment:
Apr 21 09:57:38.910516 kernel: HOME=/
Apr 21 09:57:38.910523 kernel: TERM=linux
Apr 21 09:57:38.910533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 09:57:38.910543 systemd[1]: Detected virtualization kvm.
Apr 21 09:57:38.910551 systemd[1]: Detected architecture arm64.
Apr 21 09:57:38.910561 systemd[1]: Running in initrd.
Apr 21 09:57:38.910569 systemd[1]: No hostname configured, using default hostname.
Apr 21 09:57:38.910577 systemd[1]: Hostname set to .
Apr 21 09:57:38.910585 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 09:57:38.910593 systemd[1]: Queued start job for default target initrd.target.
Apr 21 09:57:38.910602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:38.910610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:38.910619 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 09:57:38.910629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 09:57:38.910638 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 09:57:38.910646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 09:57:38.910656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 09:57:38.910665 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 09:57:38.910673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:38.910682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:38.910692 systemd[1]: Reached target paths.target - Path Units.
Apr 21 09:57:38.910700 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 09:57:38.910708 systemd[1]: Reached target swap.target - Swaps.
Apr 21 09:57:38.910717 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 09:57:38.910725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 09:57:38.910733 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 09:57:38.910741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 09:57:38.910750 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 09:57:38.910759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:38.910769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:38.910777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:38.910786 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 09:57:38.910794 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 09:57:38.910803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 09:57:38.911193 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 09:57:38.911212 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 09:57:38.911221 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 09:57:38.911234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 09:57:38.911242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:38.911250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 09:57:38.911288 systemd-journald[236]: Collecting audit messages is disabled.
Apr 21 09:57:38.911319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:38.911327 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 09:57:38.911337 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 09:57:38.911345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 09:57:38.911357 systemd-journald[236]: Journal started
Apr 21 09:57:38.911376 systemd-journald[236]: Runtime Journal (/run/log/journal/23fbff7cc6ed4f04935f08dcb063b7e0) is 8.0M, max 76.6M, 68.6M free.
Apr 21 09:57:38.893878 systemd-modules-load[237]: Inserted module 'overlay'
Apr 21 09:57:38.914548 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 09:57:38.914570 kernel: Bridge firewalling registered
Apr 21 09:57:38.914463 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 21 09:57:38.925172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 09:57:38.928871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:38.929874 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:38.932851 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 09:57:38.940049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 09:57:38.942515 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:57:38.946070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 09:57:38.949909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:38.963296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:38.974018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:38.975295 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:38.977606 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:38.986276 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 09:57:38.998953 dracut-cmdline[275]: dracut-dracut-053
Apr 21 09:57:39.003531 systemd-resolved[272]: Positive Trust Anchors:
Apr 21 09:57:39.003546 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 09:57:39.003578 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 09:57:39.013756 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 09:57:39.008535 systemd-resolved[272]: Defaulting to hostname 'linux'.
Apr 21 09:57:39.017020 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:39.018180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:39.104874 kernel: SCSI subsystem initialized
Apr 21 09:57:39.109860 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 09:57:39.117904 kernel: iscsi: registered transport (tcp)
Apr 21 09:57:39.132179 kernel: iscsi: registered transport (qla4xxx)
Apr 21 09:57:39.132265 kernel: QLogic iSCSI HBA Driver
Apr 21 09:57:39.182629 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 09:57:39.191141 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 09:57:39.211991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 09:57:39.212118 kernel: device-mapper: uevent: version 1.0.3
Apr 21 09:57:39.212164 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 09:57:39.262876 kernel: raid6: neonx8 gen() 15638 MB/s
Apr 21 09:57:39.279866 kernel: raid6: neonx4 gen() 15555 MB/s
Apr 21 09:57:39.296910 kernel: raid6: neonx2 gen() 13167 MB/s
Apr 21 09:57:39.313877 kernel: raid6: neonx1 gen() 10434 MB/s
Apr 21 09:57:39.330901 kernel: raid6: int64x8 gen() 6893 MB/s
Apr 21 09:57:39.347876 kernel: raid6: int64x4 gen() 7270 MB/s
Apr 21 09:57:39.364893 kernel: raid6: int64x2 gen() 6095 MB/s
Apr 21 09:57:39.381875 kernel: raid6: int64x1 gen() 5033 MB/s
Apr 21 09:57:39.381957 kernel: raid6: using algorithm neonx8 gen() 15638 MB/s
Apr 21 09:57:39.398911 kernel: raid6: .... xor() 11930 MB/s, rmw enabled
Apr 21 09:57:39.398992 kernel: raid6: using neon recovery algorithm
Apr 21 09:57:39.404059 kernel: xor: measuring software checksum speed
Apr 21 09:57:39.404153 kernel: 8regs : 19783 MB/sec
Apr 21 09:57:39.404178 kernel: 32regs : 19660 MB/sec
Apr 21 09:57:39.404199 kernel: arm64_neon : 25632 MB/sec
Apr 21 09:57:39.404852 kernel: xor: using function: arm64_neon (25632 MB/sec)
Apr 21 09:57:39.455892 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 09:57:39.470791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 09:57:39.477162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:39.491921 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Apr 21 09:57:39.495320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:39.505271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 09:57:39.521065 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Apr 21 09:57:39.559063 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 09:57:39.570201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 09:57:39.619223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:39.627154 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 09:57:39.647784 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 09:57:39.648482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 09:57:39.649980 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:39.650569 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 09:57:39.658987 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 09:57:39.677918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 09:57:39.733978 kernel: scsi host0: Virtio SCSI HBA
Apr 21 09:57:39.734205 kernel: ACPI: bus type USB registered
Apr 21 09:57:39.739835 kernel: usbcore: registered new interface driver usbfs
Apr 21 09:57:39.739905 kernel: usbcore: registered new interface driver hub
Apr 21 09:57:39.739916 kernel: usbcore: registered new device driver usb
Apr 21 09:57:39.743972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 09:57:39.744105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:39.747035 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 09:57:39.747607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 09:57:39.747756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:39.748525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:39.755167 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 21 09:57:39.755218 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 21 09:57:39.759195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:39.778841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:39.784839 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 21 09:57:39.785840 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 21 09:57:39.785980 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 21 09:57:39.789175 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 21 09:57:39.789341 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 21 09:57:39.789428 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 21 09:57:39.789956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 09:57:39.793841 kernel: hub 1-0:1.0: USB hub found
Apr 21 09:57:39.794012 kernel: hub 1-0:1.0: 4 ports detected
Apr 21 09:57:39.794097 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 21 09:57:39.795837 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 21 09:57:39.798167 kernel: hub 2-0:1.0: USB hub found
Apr 21 09:57:39.798354 kernel: hub 2-0:1.0: 4 ports detected
Apr 21 09:57:39.798984 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 21 09:57:39.799888 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 09:57:39.801860 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 21 09:57:39.815612 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 21 09:57:39.815954 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 21 09:57:39.816751 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 21 09:57:39.816888 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 21 09:57:39.816980 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 21 09:57:39.822889 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 09:57:39.822974 kernel: GPT:17805311 != 80003071
Apr 21 09:57:39.823003 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 09:57:39.823032 kernel: GPT:17805311 != 80003071
Apr 21 09:57:39.823057 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 09:57:39.823081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 09:57:39.822992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:39.826845 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 21 09:57:39.861881 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (526)
Apr 21 09:57:39.874852 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (515)
Apr 21 09:57:39.880595 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 21 09:57:39.886406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 21 09:57:39.892622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 09:57:39.897840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 21 09:57:39.900764 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 21 09:57:39.916162 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 09:57:39.924337 disk-uuid[574]: Primary Header is updated.
Apr 21 09:57:39.924337 disk-uuid[574]: Secondary Entries is updated.
Apr 21 09:57:39.924337 disk-uuid[574]: Secondary Header is updated.
Apr 21 09:57:39.932862 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 09:57:39.938848 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 09:57:40.035221 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 21 09:57:40.170838 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 21 09:57:40.172410 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 21 09:57:40.172607 kernel: usbcore: registered new interface driver usbhid
Apr 21 09:57:40.172627 kernel: usbhid: USB HID core driver
Apr 21 09:57:40.276916 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 21 09:57:40.404870 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 21 09:57:40.459407 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 21 09:57:40.943932 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 09:57:40.944683 disk-uuid[575]: The operation has completed successfully.
Apr 21 09:57:40.996993 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 09:57:40.997146 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 09:57:41.017143 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 09:57:41.021603 sh[590]: Success
Apr 21 09:57:41.036864 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 21 09:57:41.085767 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 09:57:41.096184 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 09:57:41.100873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 09:57:41.125490 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8
Apr 21 09:57:41.125566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:41.125585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 09:57:41.125614 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 09:57:41.125918 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 09:57:41.132868 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 09:57:41.135072 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 09:57:41.138211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 09:57:41.144072 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 09:57:41.148083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 09:57:41.158225 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:41.158293 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:41.159072 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:41.164883 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:41.164940 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:41.176005 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:41.176583 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 09:57:41.186308 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 09:57:41.193040 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 09:57:41.293769 ignition[668]: Ignition 2.19.0
Apr 21 09:57:41.293780 ignition[668]: Stage: fetch-offline
Apr 21 09:57:41.293839 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:41.293847 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:41.294019 ignition[668]: parsed url from cmdline: ""
Apr 21 09:57:41.294025 ignition[668]: no config URL provided
Apr 21 09:57:41.294030 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 09:57:41.294039 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Apr 21 09:57:41.294044 ignition[668]: failed to fetch config: resource requires networking
Apr 21 09:57:41.299621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 09:57:41.294432 ignition[668]: Ignition finished successfully
Apr 21 09:57:41.319165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 09:57:41.325126 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 09:57:41.358232 systemd-networkd[778]: lo: Link UP
Apr 21 09:57:41.358243 systemd-networkd[778]: lo: Gained carrier
Apr 21 09:57:41.359801 systemd-networkd[778]: Enumeration completed
Apr 21 09:57:41.359940 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 09:57:41.360933 systemd[1]: Reached target network.target - Network.
Apr 21 09:57:41.361940 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.361943 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:41.362716 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.362719 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:41.363198 systemd-networkd[778]: eth0: Link UP
Apr 21 09:57:41.363207 systemd-networkd[778]: eth0: Gained carrier
Apr 21 09:57:41.363214 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.368195 systemd-networkd[778]: eth1: Link UP
Apr 21 09:57:41.368199 systemd-networkd[778]: eth1: Gained carrier
Apr 21 09:57:41.368209 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.369056 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 09:57:41.383703 ignition[780]: Ignition 2.19.0
Apr 21 09:57:41.383712 ignition[780]: Stage: fetch
Apr 21 09:57:41.383936 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:41.383947 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:41.384043 ignition[780]: parsed url from cmdline: ""
Apr 21 09:57:41.384046 ignition[780]: no config URL provided
Apr 21 09:57:41.384051 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 09:57:41.384059 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Apr 21 09:57:41.384083 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 09:57:41.384851 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 09:57:41.409932 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:41.431936 systemd-networkd[778]: eth0: DHCPv4 address 178.104.221.144/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:41.584927 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 09:57:41.590266 ignition[780]: GET result: OK
Apr 21 09:57:41.590400 ignition[780]: parsing config with SHA512: c02822c0cc2ff201542c8bd1577730304bed2406c3e767fc210a0794756068de0eb48253842545c4b7a606cc79bc232df22b44978dc31893cdc03fb20a6c230d
Apr 21 09:57:41.595828 unknown[780]: fetched base config from "system"
Apr 21 09:57:41.595840 unknown[780]: fetched base config from "system"
Apr 21 09:57:41.596400 ignition[780]: fetch: fetch complete
Apr 21 09:57:41.595845 unknown[780]: fetched user config from "hetzner"
Apr 21 09:57:41.596406 ignition[780]: fetch: fetch passed
Apr 21 09:57:41.598383 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:41.596458 ignition[780]: Ignition finished successfully
Apr 21 09:57:41.611414 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 09:57:41.629509 ignition[787]: Ignition 2.19.0
Apr 21 09:57:41.629518 ignition[787]: Stage: kargs
Apr 21 09:57:41.629690 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:41.629699 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:41.632970 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:41.630811 ignition[787]: kargs: kargs passed
Apr 21 09:57:41.630886 ignition[787]: Ignition finished successfully
Apr 21 09:57:41.646405 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 09:57:41.660801 ignition[793]: Ignition 2.19.0
Apr 21 09:57:41.660812 ignition[793]: Stage: disks
Apr 21 09:57:41.661025 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:41.661036 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:41.662078 ignition[793]: disks: disks passed
Apr 21 09:57:41.662143 ignition[793]: Ignition finished successfully
Apr 21 09:57:41.665181 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 09:57:41.666514 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:41.667872 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 09:57:41.669253 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:41.670340 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:41.671365 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:41.681161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 09:57:41.702531 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 09:57:41.706649 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 09:57:41.711993 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 09:57:41.766843 kernel: EXT4-fs (sda9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none.
Apr 21 09:57:41.768329 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 09:57:41.769926 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 09:57:41.780046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:41.785004 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 09:57:41.787127 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 09:57:41.787854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 09:57:41.787887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:41.795392 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 09:57:41.799743 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (809)
Apr 21 09:57:41.799034 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 09:57:41.803983 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:41.804026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:41.804046 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:41.813910 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:41.813970 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:41.818572 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:41.866504 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 09:57:41.868444 coreos-metadata[811]: Apr 21 09:57:41.868 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 09:57:41.870038 coreos-metadata[811]: Apr 21 09:57:41.869 INFO Fetch successful
Apr 21 09:57:41.872718 coreos-metadata[811]: Apr 21 09:57:41.871 INFO wrote hostname ci-4081-3-7-d-6a70a4c656 to /sysroot/etc/hostname
Apr 21 09:57:41.874482 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Apr 21 09:57:41.875933 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:41.882426 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 09:57:41.887047 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 09:57:41.990421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:41.996992 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 09:57:42.001395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:42.007854 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:42.029179 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:42.034184 ignition[926]: INFO : Ignition 2.19.0
Apr 21 09:57:42.035977 ignition[926]: INFO : Stage: mount
Apr 21 09:57:42.035977 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:42.035977 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:42.038943 ignition[926]: INFO : mount: mount passed
Apr 21 09:57:42.038943 ignition[926]: INFO : Ignition finished successfully
Apr 21 09:57:42.040155 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 09:57:42.051001 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 09:57:42.124493 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 09:57:42.135140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:42.144851 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (937)
Apr 21 09:57:42.146198 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:42.146250 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:42.146276 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:42.149857 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:42.149899 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:42.152315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:42.172858 ignition[954]: INFO : Ignition 2.19.0
Apr 21 09:57:42.173787 ignition[954]: INFO : Stage: files
Apr 21 09:57:42.175866 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:42.175866 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:42.175866 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 09:57:42.178059 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 09:57:42.178850 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 09:57:42.182340 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 09:57:42.183546 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 09:57:42.184932 unknown[954]: wrote ssh authorized keys file for user: core
Apr 21 09:57:42.185746 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 09:57:42.189289 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:42.192065 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 21 09:57:42.267401 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 09:57:42.357328 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:42.357328 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:42.359753 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:42.367708 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:42.367708 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:42.367708 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:42.367708 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:42.367708 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 21 09:57:42.716733 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 09:57:43.061938 systemd-networkd[778]: eth0: Gained IPv6LL
Apr 21 09:57:43.254027 systemd-networkd[778]: eth1: Gained IPv6LL
Apr 21 09:57:43.536472 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:43.536472 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:43.539532 ignition[954]: INFO : files: files passed
Apr 21 09:57:43.539532 ignition[954]: INFO : Ignition finished successfully
Apr 21 09:57:43.540923 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 09:57:43.547921 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 09:57:43.551733 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 09:57:43.555570 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 09:57:43.555673 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 09:57:43.570749 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:43.570749 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:43.573154 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:43.575206 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:43.576188 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 09:57:43.584019 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 09:57:43.609953 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 09:57:43.610703 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 09:57:43.612454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 09:57:43.614657 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 09:57:43.616986 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 09:57:43.624995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 09:57:43.642038 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:43.649057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 09:57:43.659197 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:43.659921 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:43.661556 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 09:57:43.662597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 09:57:43.662719 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:43.664048 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 09:57:43.664668 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 09:57:43.665728 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 09:57:43.666845 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:43.667861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:43.669011 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 09:57:43.670131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 09:57:43.671346 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 09:57:43.672455 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 09:57:43.673532 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 09:57:43.674442 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 09:57:43.674574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 09:57:43.675964 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:43.676586 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:43.677521 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 09:57:43.678004 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:43.678720 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 09:57:43.678849 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 09:57:43.680437 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 09:57:43.680556 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:43.682539 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 09:57:43.682633 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 09:57:43.683579 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 09:57:43.683681 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:43.690012 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 09:57:43.690516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 09:57:43.690630 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:43.697027 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:43.697522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 09:57:43.697627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:43.699921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 09:57:43.700012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 09:57:43.709308 ignition[1006]: INFO : Ignition 2.19.0
Apr 21 09:57:43.709308 ignition[1006]: INFO : Stage: umount
Apr 21 09:57:43.709308 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:43.709308 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:43.709215 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 09:57:43.715685 ignition[1006]: INFO : umount: umount passed
Apr 21 09:57:43.715685 ignition[1006]: INFO : Ignition finished successfully
Apr 21 09:57:43.711050 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 09:57:43.714867 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 09:57:43.714975 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 09:57:43.719240 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 09:57:43.720748 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 09:57:43.721138 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 09:57:43.722863 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 09:57:43.722907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:43.724073 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 09:57:43.724117 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:43.725761 systemd[1]: Stopped target network.target - Network.
Apr 21 09:57:43.726374 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 09:57:43.726423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 09:57:43.727068 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 09:57:43.728614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 09:57:43.731907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:43.732854 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 09:57:43.734517 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 09:57:43.735583 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 09:57:43.735635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 09:57:43.736905 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 09:57:43.736947 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 09:57:43.737941 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 09:57:43.737999 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 09:57:43.738960 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 09:57:43.739005 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 09:57:43.740279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 09:57:43.740916 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:43.742200 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 09:57:43.742297 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:43.743347 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 09:57:43.743433 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:43.744512 systemd-networkd[778]: eth1: DHCPv6 lease lost
Apr 21 09:57:43.751115 systemd-networkd[778]: eth0: DHCPv6 lease lost
Apr 21 09:57:43.753129 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 09:57:43.753333 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:43.756693 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 09:57:43.757644 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 09:57:43.761810 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 09:57:43.762620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:43.767921 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 09:57:43.768812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 09:57:43.768938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 09:57:43.773003 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 09:57:43.773083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:43.774119 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 09:57:43.774161 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:43.775214 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 09:57:43.775266 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:43.776633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:43.788552 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 09:57:43.788662 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 09:57:43.798017 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 09:57:43.798425 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:43.800493 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 09:57:43.800549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:43.802466 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 09:57:43.802496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:43.803492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 09:57:43.803538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 09:57:43.805096 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 09:57:43.805141 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 09:57:43.806540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 09:57:43.806585 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:43.814981 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 09:57:43.815591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 09:57:43.815649 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:43.821560 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 09:57:43.821665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:43.824232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 09:57:43.824332 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 09:57:43.825559 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 09:57:43.836215 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 09:57:43.850664 systemd[1]: Switching root.
Apr 21 09:57:43.881874 systemd-journald[236]: Journal stopped
Apr 21 09:57:44.790982 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Apr 21 09:57:44.791054 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 09:57:44.791074 kernel: SELinux: policy capability open_perms=1
Apr 21 09:57:44.791084 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 09:57:44.791093 kernel: SELinux: policy capability always_check_network=0
Apr 21 09:57:44.791102 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 09:57:44.791112 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 09:57:44.791121 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 09:57:44.791131 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 09:57:44.791140 kernel: audit: type=1403 audit(1776765464.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 09:57:44.791152 systemd[1]: Successfully loaded SELinux policy in 37.075ms.
Apr 21 09:57:44.791177 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.034ms.
Apr 21 09:57:44.791190 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 09:57:44.791200 systemd[1]: Detected virtualization kvm.
Apr 21 09:57:44.791211 systemd[1]: Detected architecture arm64.
Apr 21 09:57:44.791221 systemd[1]: Detected first boot.
Apr 21 09:57:44.791231 systemd[1]: Hostname set to .
Apr 21 09:57:44.791241 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 09:57:44.791252 zram_generator::config[1049]: No configuration found.
Apr 21 09:57:44.791265 systemd[1]: Populated /etc with preset unit settings.
Apr 21 09:57:44.791275 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 09:57:44.791285 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 09:57:44.791295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 09:57:44.791306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 09:57:44.791316 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 09:57:44.791327 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 09:57:44.791337 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 09:57:44.791349 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 09:57:44.791360 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 09:57:44.791370 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 09:57:44.791381 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 09:57:44.791391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:44.791401 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:44.791413 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 09:57:44.791423 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 09:57:44.791434 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 09:57:44.791446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 09:57:44.791456 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 21 09:57:44.791470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:44.791481 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 09:57:44.791491 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 09:57:44.791502 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 09:57:44.791513 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 09:57:44.791524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:44.791537 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 09:57:44.791548 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 09:57:44.791558 systemd[1]: Reached target swap.target - Swaps.
Apr 21 09:57:44.791568 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 09:57:44.791578 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 09:57:44.791589 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:44.791599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:44.791611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:44.791621 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 09:57:44.791631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 09:57:44.791643 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 09:57:44.791654 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 09:57:44.791664 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 09:57:44.791679 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 09:57:44.791689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 09:57:44.791700 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 09:57:44.791711 systemd[1]: Reached target machines.target - Containers.
Apr 21 09:57:44.791721 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 09:57:44.791736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:44.791746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 09:57:44.791757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 09:57:44.791767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:44.791780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:44.791792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:44.791806 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 09:57:44.791967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:44.791983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 09:57:44.791994 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 09:57:44.792005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 09:57:44.792019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 09:57:44.792030 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 09:57:44.792076 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 09:57:44.792090 kernel: loop: module loaded
Apr 21 09:57:44.792101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 09:57:44.792111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 09:57:44.792122 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 09:57:44.792132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 09:57:44.792143 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 09:57:44.792153 systemd[1]: Stopped verity-setup.service.
Apr 21 09:57:44.792166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 09:57:44.792177 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 09:57:44.792187 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 09:57:44.792197 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 09:57:44.792207 kernel: fuse: init (API version 7.39)
Apr 21 09:57:44.792217 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 09:57:44.792228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 09:57:44.792240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:44.792251 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 09:57:44.792261 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 09:57:44.792271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:44.792282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:44.792293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:44.792306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:44.792317 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 09:57:44.792328 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 09:57:44.792340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:44.792351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:44.792361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:44.792373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 09:57:44.792386 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 09:57:44.792397 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 09:57:44.792407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 09:57:44.792417 kernel: ACPI: bus type drm_connector registered
Apr 21 09:57:44.792427 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 09:57:44.792438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 09:57:44.792448 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:44.792461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 09:57:44.792503 systemd-journald[1116]: Collecting audit messages is disabled.
Apr 21 09:57:44.792528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 09:57:44.792539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 09:57:44.792550 systemd-journald[1116]: Journal started
Apr 21 09:57:44.792571 systemd-journald[1116]: Runtime Journal (/run/log/journal/23fbff7cc6ed4f04935f08dcb063b7e0) is 8.0M, max 76.6M, 68.6M free.
Apr 21 09:57:44.477281 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 09:57:44.501558 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 09:57:44.501965 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 09:57:44.795857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:44.798306 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 09:57:44.801577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:44.810851 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 09:57:44.810931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:44.823191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:57:44.834836 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 09:57:44.838844 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 09:57:44.839978 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 09:57:44.842139 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:44.843883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:44.848567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:44.852659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 09:57:44.852832 kernel: loop0: detected capacity change from 0 to 8
Apr 21 09:57:44.853503 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 09:57:44.857911 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 09:57:44.859799 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 09:57:44.867900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 09:57:44.892422 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 09:57:44.894891 kernel: loop1: detected capacity change from 0 to 209336
Apr 21 09:57:44.906781 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 09:57:44.911539 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 09:57:44.919286 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 09:57:44.924268 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 09:57:44.927366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:44.933986 systemd-journald[1116]: Time spent on flushing to /var/log/journal/23fbff7cc6ed4f04935f08dcb063b7e0 is 45.099ms for 1132 entries.
Apr 21 09:57:44.933986 systemd-journald[1116]: System Journal (/var/log/journal/23fbff7cc6ed4f04935f08dcb063b7e0) is 8.0M, max 584.8M, 576.8M free.
Apr 21 09:57:44.997749 systemd-journald[1116]: Received client request to flush runtime journal.
Apr 21 09:57:44.997790 kernel: loop2: detected capacity change from 0 to 114328
Apr 21 09:57:44.997806 kernel: loop3: detected capacity change from 0 to 114432
Apr 21 09:57:44.969315 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 09:57:44.992443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 09:57:44.998943 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 09:57:45.006883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 09:57:45.011005 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 09:57:45.032323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 09:57:45.059605 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Apr 21 09:57:45.060063 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Apr 21 09:57:45.063875 kernel: loop4: detected capacity change from 0 to 8
Apr 21 09:57:45.067880 kernel: loop5: detected capacity change from 0 to 209336
Apr 21 09:57:45.068373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:45.080916 kernel: loop6: detected capacity change from 0 to 114328
Apr 21 09:57:45.101251 kernel: loop7: detected capacity change from 0 to 114432
Apr 21 09:57:45.117219 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 21 09:57:45.118217 (sd-merge)[1187]: Merged extensions into '/usr'.
Apr 21 09:57:45.127954 systemd[1]: Reloading requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 09:57:45.127977 systemd[1]: Reloading...
Apr 21 09:57:45.212843 zram_generator::config[1210]: No configuration found.
Apr 21 09:57:45.358917 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 09:57:45.361005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:45.406958 systemd[1]: Reloading finished in 278 ms.
Apr 21 09:57:45.434798 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 09:57:45.437994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 09:57:45.445176 systemd[1]: Starting ensure-sysext.service...
Apr 21 09:57:45.448298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 09:57:45.456192 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Apr 21 09:57:45.456309 systemd[1]: Reloading...
Apr 21 09:57:45.477267 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 09:57:45.477522 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 09:57:45.478202 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 09:57:45.478410 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Apr 21 09:57:45.478454 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Apr 21 09:57:45.483898 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:45.483911 systemd-tmpfiles[1252]: Skipping /boot
Apr 21 09:57:45.496461 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:45.496477 systemd-tmpfiles[1252]: Skipping /boot
Apr 21 09:57:45.538839 zram_generator::config[1278]: No configuration found.
Apr 21 09:57:45.660868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:45.706708 systemd[1]: Reloading finished in 250 ms.
Apr 21 09:57:45.724019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 09:57:45.729570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:45.753986 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 09:57:45.759021 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 09:57:45.762879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 09:57:45.767474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:45.771122 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:45.775587 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 09:57:45.783156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.789143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:45.793058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:45.800396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:45.802358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.805854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.807097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.810453 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 09:57:45.814973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.817964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:45.818648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.819597 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 09:57:45.822878 systemd[1]: Finished ensure-sysext.service.
Apr 21 09:57:45.841573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 09:57:45.844864 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 09:57:45.847878 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 09:57:45.854536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:45.855205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:45.861091 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Apr 21 09:57:45.861537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:45.861699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:45.864258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:45.867917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:45.868397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:45.870376 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:45.874159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:45.875587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:45.892673 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 09:57:45.894312 augenrules[1354]: No rules
Apr 21 09:57:45.895113 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 09:57:45.906625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:45.921038 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 09:57:45.921873 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 09:57:45.924248 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 09:57:45.926107 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 09:57:46.018646 systemd-networkd[1366]: lo: Link UP
Apr 21 09:57:46.018987 systemd-networkd[1366]: lo: Gained carrier
Apr 21 09:57:46.019905 systemd-networkd[1366]: Enumeration completed
Apr 21 09:57:46.020114 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 09:57:46.028288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 09:57:46.062968 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 09:57:46.063711 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 21 09:57:46.063742 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 09:57:46.069177 systemd-resolved[1327]: Positive Trust Anchors:
Apr 21 09:57:46.069196 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 09:57:46.069229 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 09:57:46.075594 systemd-resolved[1327]: Using system hostname 'ci-4081-3-7-d-6a70a4c656'.
Apr 21 09:57:46.078292 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:46.080761 systemd[1]: Reached target network.target - Network.
Apr 21 09:57:46.082898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:46.150666 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:46.151454 systemd-networkd[1366]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:46.151927 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 09:57:46.152329 systemd-networkd[1366]: eth1: Link UP
Apr 21 09:57:46.152708 systemd-networkd[1366]: eth1: Gained carrier
Apr 21 09:57:46.152766 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:46.174620 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 21 09:57:46.174965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:46.182106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:46.186856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:46.191158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:46.192775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:46.192886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 09:57:46.193323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:46.194050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:46.195373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:46.197596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:46.199557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:46.200156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:46.203865 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:46.206597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:46.209270 systemd-networkd[1366]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:46.210653 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:46.210725 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:46.210729 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:46.211279 systemd-networkd[1366]: eth0: Link UP
Apr 21 09:57:46.211282 systemd-networkd[1366]: eth0: Gained carrier
Apr 21 09:57:46.211296 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:46.237399 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 21 09:57:46.237458 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 21 09:57:46.237471 kernel: [drm] features: -context_init
Apr 21 09:57:46.239844 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1376)
Apr 21 09:57:46.267218 kernel: [drm] number of scanouts: 1
Apr 21 09:57:46.267275 kernel: [drm] number of cap sets: 0
Apr 21 09:57:46.270410 systemd-networkd[1366]: eth0: DHCPv4 address 178.104.221.144/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:46.271880 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:46.277843 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 21 09:57:46.292435 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 09:57:46.292572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:46.301867 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 21 09:57:46.319662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 09:57:46.321390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 09:57:46.322869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:46.330047 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 09:57:46.332735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:46.348884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 09:57:46.399873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:46.428405 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 09:57:46.435143 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 09:57:46.452839 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:46.481876 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 09:57:46.483151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:46.483902 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:46.484736 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 09:57:46.485597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 09:57:46.486660 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 09:57:46.487602 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 09:57:46.488454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 09:57:46.489244 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 09:57:46.489351 systemd[1]: Reached target paths.target - Path Units.
Apr 21 09:57:46.489897 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 09:57:46.492153 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 09:57:46.495945 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 09:57:46.503214 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 09:57:46.506401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 09:57:46.508138 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 09:57:46.509016 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 09:57:46.509720 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:46.510482 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:46.510510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:46.515140 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 09:57:46.519098 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:46.520909 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 09:57:46.526064 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 09:57:46.539050 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 09:57:46.543219 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 09:57:46.544507 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 09:57:46.546101 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 09:57:46.549277 jq[1443]: false
Apr 21 09:57:46.549538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 09:57:46.552301 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 09:57:46.558013 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 09:57:46.568124 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 09:57:46.574058 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 09:57:46.576312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 09:57:46.577415 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 09:57:46.584339 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 09:57:46.592934 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 09:57:46.596318 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 09:57:46.598148 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 09:57:46.598517 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 09:57:46.614255 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 09:57:46.614726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 09:57:46.618483 dbus-daemon[1442]: [system] SELinux support is enabled
Apr 21 09:57:46.621368 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found loop4
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found loop5
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found loop6
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found loop7
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda1
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda2
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda3
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found usr
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda4
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda6
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda7
Apr 21 09:57:46.623245 extend-filesystems[1446]: Found sda9
Apr 21 09:57:46.623245 extend-filesystems[1446]: Checking size of /dev/sda9
Apr 21 09:57:46.651356 coreos-metadata[1441]: Apr 21 09:57:46.647 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 09:57:46.651675 jq[1455]: true
Apr 21 09:57:46.624892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 09:57:46.624918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 09:57:46.652212 coreos-metadata[1441]: Apr 21 09:57:46.652 INFO Fetch successful Apr 21 09:57:46.652212 coreos-metadata[1441]: Apr 21 09:57:46.652 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 21 09:57:46.627935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 09:57:46.627953 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 09:57:46.649629 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 09:57:46.649845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 09:57:46.661489 coreos-metadata[1441]: Apr 21 09:57:46.661 INFO Fetch successful Apr 21 09:57:46.668847 extend-filesystems[1446]: Resized partition /dev/sda9 Apr 21 09:57:46.671082 update_engine[1453]: I20260421 09:57:46.670784 1453 main.cc:92] Flatcar Update Engine starting Apr 21 09:57:46.676205 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024) Apr 21 09:57:46.686810 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 21 09:57:46.676286 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 09:57:46.688339 update_engine[1453]: I20260421 09:57:46.682998 1453 update_check_scheduler.cc:74] Next update check in 7m40s Apr 21 09:57:46.680080 systemd[1]: Started update-engine.service - Update Engine. Apr 21 09:57:46.691982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 09:57:46.699961 tar[1462]: linux-arm64/LICENSE Apr 21 09:57:46.699961 tar[1462]: linux-arm64/helm Apr 21 09:57:46.700231 jq[1476]: true Apr 21 09:57:46.778497 systemd-logind[1452]: New seat seat0. 
Apr 21 09:57:46.785976 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Apr 21 09:57:46.786009 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 21 09:57:46.786585 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 09:57:46.800295 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 21 09:57:46.802226 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 09:57:46.817976 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 21 09:57:46.829495 extend-filesystems[1484]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 21 09:57:46.829495 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 21 09:57:46.829495 extend-filesystems[1484]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 21 09:57:46.846077 extend-filesystems[1446]: Resized filesystem in /dev/sda9 Apr 21 09:57:46.846077 extend-filesystems[1446]: Found sr0 Apr 21 09:57:46.831269 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 09:57:46.831601 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 09:57:46.853640 bash[1515]: Updated "/home/core/.ssh/authorized_keys" Apr 21 09:57:46.857616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 09:57:46.878946 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1380) Apr 21 09:57:46.881362 systemd[1]: Starting sshkeys.service... Apr 21 09:57:46.909750 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 21 09:57:46.927293 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 21 09:57:46.989456 coreos-metadata[1522]: Apr 21 09:57:46.989 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 21 09:57:46.993786 coreos-metadata[1522]: Apr 21 09:57:46.993 INFO Fetch successful Apr 21 09:57:47.000216 unknown[1522]: wrote ssh authorized keys file for user: core Apr 21 09:57:47.011778 containerd[1482]: time="2026-04-21T09:57:47.011674680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 09:57:47.043205 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Apr 21 09:57:47.044407 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 21 09:57:47.048157 systemd[1]: Finished sshkeys.service. Apr 21 09:57:47.050323 containerd[1482]: time="2026-04-21T09:57:47.050283240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.053714 containerd[1482]: time="2026-04-21T09:57:47.053676160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 09:57:47.053850 containerd[1482]: time="2026-04-21T09:57:47.053787920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 09:57:47.054835 containerd[1482]: time="2026-04-21T09:57:47.053813120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 09:57:47.054835 containerd[1482]: time="2026-04-21T09:57:47.054156600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 21 09:57:47.054835 containerd[1482]: time="2026-04-21T09:57:47.054177120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.054835 containerd[1482]: time="2026-04-21T09:57:47.054240640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 09:57:47.054835 containerd[1482]: time="2026-04-21T09:57:47.054256440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.055540 containerd[1482]: time="2026-04-21T09:57:47.055515520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 09:57:47.055608 containerd[1482]: time="2026-04-21T09:57:47.055594720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.055665 containerd[1482]: time="2026-04-21T09:57:47.055649520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 09:57:47.055869 containerd[1482]: time="2026-04-21T09:57:47.055850160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.056105 containerd[1482]: time="2026-04-21T09:57:47.056080920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 09:57:47.057138 containerd[1482]: time="2026-04-21T09:57:47.057112320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 21 09:57:47.059277 containerd[1482]: time="2026-04-21T09:57:47.058938400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 09:57:47.059277 containerd[1482]: time="2026-04-21T09:57:47.058960520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 09:57:47.059277 containerd[1482]: time="2026-04-21T09:57:47.059063560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 21 09:57:47.059277 containerd[1482]: time="2026-04-21T09:57:47.059107120Z" level=info msg="metadata content store policy set" policy=shared Apr 21 09:57:47.065087 containerd[1482]: time="2026-04-21T09:57:47.065037720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 09:57:47.065197 containerd[1482]: time="2026-04-21T09:57:47.065180960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 09:57:47.065313 containerd[1482]: time="2026-04-21T09:57:47.065292800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 09:57:47.065563 containerd[1482]: time="2026-04-21T09:57:47.065364880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 09:57:47.065563 containerd[1482]: time="2026-04-21T09:57:47.065385880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 09:57:47.065563 containerd[1482]: time="2026-04-21T09:57:47.065518720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.067890080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068080760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068101080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068118000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068131880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068145320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068160240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068175160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068189160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068202160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068214600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068227600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068250080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068658 containerd[1482]: time="2026-04-21T09:57:47.068264120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068276600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068289440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068301280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068313640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068329160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068342000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068360760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068374760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068386880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068399280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068411040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068428840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068450920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068463280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.068965 containerd[1482]: time="2026-04-21T09:57:47.068473880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 09:57:47.069259 containerd[1482]: time="2026-04-21T09:57:47.068594920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 09:57:47.069259 containerd[1482]: time="2026-04-21T09:57:47.068612640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 09:57:47.069259 containerd[1482]: time="2026-04-21T09:57:47.068624160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 09:57:47.070467 containerd[1482]: time="2026-04-21T09:57:47.068635520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 09:57:47.070467 containerd[1482]: time="2026-04-21T09:57:47.069479560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 09:57:47.070467 containerd[1482]: time="2026-04-21T09:57:47.069509560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 09:57:47.070467 containerd[1482]: time="2026-04-21T09:57:47.069519880Z" level=info msg="NRI interface is disabled by configuration." Apr 21 09:57:47.070467 containerd[1482]: time="2026-04-21T09:57:47.069530800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 09:57:47.070610 containerd[1482]: time="2026-04-21T09:57:47.069881080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 09:57:47.070610 containerd[1482]: time="2026-04-21T09:57:47.069939680Z" level=info msg="Connect containerd service" Apr 21 09:57:47.070610 containerd[1482]: time="2026-04-21T09:57:47.069966880Z" level=info msg="using legacy CRI server" Apr 21 09:57:47.070610 containerd[1482]: time="2026-04-21T09:57:47.069974200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 09:57:47.070610 containerd[1482]: time="2026-04-21T09:57:47.070110080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073214400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073413760Z" level=info msg="Start subscribing containerd event" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073464680Z" level=info msg="Start recovering state" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073526320Z" level=info msg="Start event monitor" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073536840Z" level=info msg="Start 
snapshots syncer" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073544960Z" level=info msg="Start cni network conf syncer for default" Apr 21 09:57:47.073842 containerd[1482]: time="2026-04-21T09:57:47.073552560Z" level=info msg="Start streaming server" Apr 21 09:57:47.075158 containerd[1482]: time="2026-04-21T09:57:47.075135520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 09:57:47.077825 containerd[1482]: time="2026-04-21T09:57:47.075296720Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 09:57:47.077954 containerd[1482]: time="2026-04-21T09:57:47.077937400Z" level=info msg="containerd successfully booted in 0.067322s" Apr 21 09:57:47.078228 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 09:57:47.089964 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 09:57:47.380329 tar[1462]: linux-arm64/README.md Apr 21 09:57:47.398046 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 09:57:47.621000 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 09:57:47.646199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 09:57:47.657300 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 09:57:47.666368 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 09:57:47.666598 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 09:57:47.673155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 09:57:47.685667 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 09:57:47.692206 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 09:57:47.705461 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 21 09:57:47.708681 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 21 09:57:48.118058 systemd-networkd[1366]: eth1: Gained IPv6LL Apr 21 09:57:48.119120 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Apr 21 09:57:48.125735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 09:57:48.128072 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 09:57:48.144768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:57:48.148561 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 09:57:48.173679 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 09:57:48.182090 systemd-networkd[1366]: eth0: Gained IPv6LL Apr 21 09:57:48.182604 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Apr 21 09:57:48.866195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:57:48.868634 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 09:57:48.869090 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 09:57:48.873011 systemd[1]: Startup finished in 746ms (kernel) + 5.318s (initrd) + 4.897s (userspace) = 10.963s. Apr 21 09:57:49.371179 kubelet[1573]: E0421 09:57:49.371118 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 09:57:49.374191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 09:57:49.374347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 09:57:51.658785 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 21 09:57:51.667387 systemd[1]: Started sshd@0-178.104.221.144:22-50.85.169.122:35112.service - OpenSSH per-connection server daemon (50.85.169.122:35112). Apr 21 09:57:51.789164 sshd[1585]: Accepted publickey for core from 50.85.169.122 port 35112 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:51.791127 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:51.802789 systemd-logind[1452]: New session 1 of user core. Apr 21 09:57:51.804901 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 09:57:51.813095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 09:57:51.825745 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 09:57:51.838407 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 09:57:51.842290 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 09:57:51.946332 systemd[1589]: Queued start job for default target default.target. Apr 21 09:57:51.957883 systemd[1589]: Created slice app.slice - User Application Slice. Apr 21 09:57:51.957972 systemd[1589]: Reached target paths.target - Paths. Apr 21 09:57:51.958013 systemd[1589]: Reached target timers.target - Timers. Apr 21 09:57:51.960248 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 09:57:51.975739 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 09:57:51.975885 systemd[1589]: Reached target sockets.target - Sockets. Apr 21 09:57:51.975901 systemd[1589]: Reached target basic.target - Basic System. Apr 21 09:57:51.976081 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 09:57:51.976271 systemd[1589]: Reached target default.target - Main User Target. Apr 21 09:57:51.976378 systemd[1589]: Startup finished in 127ms. 
Apr 21 09:57:51.991171 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 09:57:52.114405 systemd[1]: Started sshd@1-178.104.221.144:22-50.85.169.122:35116.service - OpenSSH per-connection server daemon (50.85.169.122:35116). Apr 21 09:57:52.231633 sshd[1600]: Accepted publickey for core from 50.85.169.122 port 35116 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:52.234187 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:52.239516 systemd-logind[1452]: New session 2 of user core. Apr 21 09:57:52.246138 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 09:57:52.347949 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:52.352956 systemd[1]: sshd@1-178.104.221.144:22-50.85.169.122:35116.service: Deactivated successfully. Apr 21 09:57:52.355334 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 09:57:52.357685 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Apr 21 09:57:52.358799 systemd-logind[1452]: Removed session 2. Apr 21 09:57:52.385347 systemd[1]: Started sshd@2-178.104.221.144:22-50.85.169.122:35128.service - OpenSSH per-connection server daemon (50.85.169.122:35128). Apr 21 09:57:52.505573 sshd[1607]: Accepted publickey for core from 50.85.169.122 port 35128 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:52.507730 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:52.513216 systemd-logind[1452]: New session 3 of user core. Apr 21 09:57:52.521168 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 09:57:52.620222 sshd[1607]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:52.626041 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Apr 21 09:57:52.626225 systemd[1]: sshd@2-178.104.221.144:22-50.85.169.122:35128.service: Deactivated successfully. 
Apr 21 09:57:52.628441 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 09:57:52.629786 systemd-logind[1452]: Removed session 3. Apr 21 09:57:52.651214 systemd[1]: Started sshd@3-178.104.221.144:22-50.85.169.122:35130.service - OpenSSH per-connection server daemon (50.85.169.122:35130). Apr 21 09:57:52.774021 sshd[1614]: Accepted publickey for core from 50.85.169.122 port 35130 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:52.776476 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:52.782353 systemd-logind[1452]: New session 4 of user core. Apr 21 09:57:52.791327 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 09:57:52.892608 sshd[1614]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:52.897037 systemd[1]: sshd@3-178.104.221.144:22-50.85.169.122:35130.service: Deactivated successfully. Apr 21 09:57:52.899289 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 09:57:52.900357 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Apr 21 09:57:52.901549 systemd-logind[1452]: Removed session 4. Apr 21 09:57:52.920260 systemd[1]: Started sshd@4-178.104.221.144:22-50.85.169.122:35142.service - OpenSSH per-connection server daemon (50.85.169.122:35142). Apr 21 09:57:53.041092 sshd[1621]: Accepted publickey for core from 50.85.169.122 port 35142 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:53.043466 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:53.049092 systemd-logind[1452]: New session 5 of user core. Apr 21 09:57:53.056148 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 21 09:57:53.152189 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 09:57:53.152496 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:53.167406 sudo[1624]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:53.185192 sshd[1621]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:53.190161 systemd[1]: sshd@4-178.104.221.144:22-50.85.169.122:35142.service: Deactivated successfully. Apr 21 09:57:53.193028 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 09:57:53.194054 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Apr 21 09:57:53.196243 systemd-logind[1452]: Removed session 5. Apr 21 09:57:53.214421 systemd[1]: Started sshd@5-178.104.221.144:22-50.85.169.122:35152.service - OpenSSH per-connection server daemon (50.85.169.122:35152). Apr 21 09:57:53.333700 sshd[1629]: Accepted publickey for core from 50.85.169.122 port 35152 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:53.337562 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:53.344264 systemd-logind[1452]: New session 6 of user core. Apr 21 09:57:53.349301 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 09:57:53.436374 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 09:57:53.437100 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:53.441537 sudo[1633]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:53.447582 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 09:57:53.448245 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:53.470372 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 09:57:53.473766 auditctl[1636]: No rules Apr 21 09:57:53.473223 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 09:57:53.473423 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 09:57:53.476679 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 09:57:53.515991 augenrules[1654]: No rules Apr 21 09:57:53.517689 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 09:57:53.519391 sudo[1632]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:53.538164 sshd[1629]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:53.542956 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Apr 21 09:57:53.543358 systemd[1]: sshd@5-178.104.221.144:22-50.85.169.122:35152.service: Deactivated successfully. Apr 21 09:57:53.546677 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 09:57:53.550236 systemd-logind[1452]: Removed session 6. Apr 21 09:57:53.568326 systemd[1]: Started sshd@6-178.104.221.144:22-50.85.169.122:35168.service - OpenSSH per-connection server daemon (50.85.169.122:35168). 
Apr 21 09:57:53.688660 sshd[1662]: Accepted publickey for core from 50.85.169.122 port 35168 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:53.690551 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:53.696212 systemd-logind[1452]: New session 7 of user core.
Apr 21 09:57:53.701072 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 09:57:53.786627 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 09:57:53.787031 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 09:57:54.085195 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 09:57:54.087039 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 09:57:54.337481 dockerd[1680]: time="2026-04-21T09:57:54.336800160Z" level=info msg="Starting up"
Apr 21 09:57:54.409555 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4263058599-merged.mount: Deactivated successfully.
Apr 21 09:57:54.427019 dockerd[1680]: time="2026-04-21T09:57:54.426951240Z" level=info msg="Loading containers: start."
Apr 21 09:57:54.540021 kernel: Initializing XFRM netlink socket
Apr 21 09:57:54.560632 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:54.563442 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:54.571338 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:54.624129 systemd-networkd[1366]: docker0: Link UP
Apr 21 09:57:54.624757 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Apr 21 09:57:54.638725 dockerd[1680]: time="2026-04-21T09:57:54.638674040Z" level=info msg="Loading containers: done."
Apr 21 09:57:54.656683 dockerd[1680]: time="2026-04-21T09:57:54.656630960Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 09:57:54.657075 dockerd[1680]: time="2026-04-21T09:57:54.657052920Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 09:57:54.657278 dockerd[1680]: time="2026-04-21T09:57:54.657257320Z" level=info msg="Daemon has completed initialization"
Apr 21 09:57:54.705719 dockerd[1680]: time="2026-04-21T09:57:54.705504800Z" level=info msg="API listen on /run/docker.sock"
Apr 21 09:57:54.706146 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 09:57:55.194740 containerd[1482]: time="2026-04-21T09:57:55.194692880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 09:57:55.407042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2402669436-merged.mount: Deactivated successfully.
Apr 21 09:57:55.756428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695567185.mount: Deactivated successfully.
Apr 21 09:57:56.635992 containerd[1482]: time="2026-04-21T09:57:56.635899960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:56.638369 containerd[1482]: time="2026-04-21T09:57:56.637980320Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885"
Apr 21 09:57:56.640892 containerd[1482]: time="2026-04-21T09:57:56.639691800Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:56.643547 containerd[1482]: time="2026-04-21T09:57:56.643179880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:56.645553 containerd[1482]: time="2026-04-21T09:57:56.645491760Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.450237s"
Apr 21 09:57:56.645636 containerd[1482]: time="2026-04-21T09:57:56.645560760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 21 09:57:56.646572 containerd[1482]: time="2026-04-21T09:57:56.646540400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 09:57:57.651613 containerd[1482]: time="2026-04-21T09:57:57.651540360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:57.653913 containerd[1482]: time="2026-04-21T09:57:57.653785240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794"
Apr 21 09:57:57.654932 containerd[1482]: time="2026-04-21T09:57:57.654838840Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:57.659643 containerd[1482]: time="2026-04-21T09:57:57.659073600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:57.661886 containerd[1482]: time="2026-04-21T09:57:57.661495280Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.01478716s"
Apr 21 09:57:57.661886 containerd[1482]: time="2026-04-21T09:57:57.661557400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 21 09:57:57.662801 containerd[1482]: time="2026-04-21T09:57:57.662611240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 09:57:58.607234 containerd[1482]: time="2026-04-21T09:57:58.606796280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:58.608005 containerd[1482]: time="2026-04-21T09:57:58.607976960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378"
Apr 21 09:57:58.609093 containerd[1482]: time="2026-04-21T09:57:58.608648520Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:58.612274 containerd[1482]: time="2026-04-21T09:57:58.612221800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:58.613600 containerd[1482]: time="2026-04-21T09:57:58.613559280Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 950.91424ms"
Apr 21 09:57:58.613672 containerd[1482]: time="2026-04-21T09:57:58.613605760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 21 09:57:58.615960 containerd[1482]: time="2026-04-21T09:57:58.615920240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 09:57:59.473132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196248295.mount: Deactivated successfully.
Apr 21 09:57:59.474385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 09:57:59.484163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 09:57:59.614632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 09:57:59.625135 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 09:57:59.665915 kubelet[1901]: E0421 09:57:59.665626 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 09:57:59.669104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 09:57:59.669240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 09:57:59.846428 containerd[1482]: time="2026-04-21T09:57:59.846270840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:59.848115 containerd[1482]: time="2026-04-21T09:57:59.848066600Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534"
Apr 21 09:57:59.849153 containerd[1482]: time="2026-04-21T09:57:59.849056640Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:59.851279 containerd[1482]: time="2026-04-21T09:57:59.851226520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:57:59.851878 containerd[1482]: time="2026-04-21T09:57:59.851778880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.23580164s"
Apr 21 09:57:59.851878 containerd[1482]: time="2026-04-21T09:57:59.851865480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 21 09:57:59.852758 containerd[1482]: time="2026-04-21T09:57:59.852383160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 09:58:00.380242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160361218.mount: Deactivated successfully.
Apr 21 09:58:01.120231 containerd[1482]: time="2026-04-21T09:58:01.120166920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.121948 containerd[1482]: time="2026-04-21T09:58:01.121894480Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Apr 21 09:58:01.123450 containerd[1482]: time="2026-04-21T09:58:01.123387080Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.127495 containerd[1482]: time="2026-04-21T09:58:01.127430240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.128929 containerd[1482]: time="2026-04-21T09:58:01.128682280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.27626104s"
Apr 21 09:58:01.128929 containerd[1482]: time="2026-04-21T09:58:01.128728000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 21 09:58:01.129279 containerd[1482]: time="2026-04-21T09:58:01.129255080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 09:58:01.633362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789613480.mount: Deactivated successfully.
Apr 21 09:58:01.649910 containerd[1482]: time="2026-04-21T09:58:01.648942720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.650533 containerd[1482]: time="2026-04-21T09:58:01.650499200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 21 09:58:01.652710 containerd[1482]: time="2026-04-21T09:58:01.652627120Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.656010 containerd[1482]: time="2026-04-21T09:58:01.655869520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:01.657136 containerd[1482]: time="2026-04-21T09:58:01.656716240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 527.35596ms"
Apr 21 09:58:01.657136 containerd[1482]: time="2026-04-21T09:58:01.656756920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 21 09:58:01.657657 containerd[1482]: time="2026-04-21T09:58:01.657629280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 09:58:02.180008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212598252.mount: Deactivated successfully.
Apr 21 09:58:02.915586 containerd[1482]: time="2026-04-21T09:58:02.915499280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:02.918762 containerd[1482]: time="2026-04-21T09:58:02.918592360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470"
Apr 21 09:58:02.921225 containerd[1482]: time="2026-04-21T09:58:02.921145000Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:02.926144 containerd[1482]: time="2026-04-21T09:58:02.926067440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:02.928741 containerd[1482]: time="2026-04-21T09:58:02.928505240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.27075192s"
Apr 21 09:58:02.928741 containerd[1482]: time="2026-04-21T09:58:02.928556800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 21 09:58:07.652973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 09:58:07.661220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 09:58:07.696435 systemd[1]: Reloading requested from client PID 2055 ('systemctl') (unit session-7.scope)...
Apr 21 09:58:07.696450 systemd[1]: Reloading...
Apr 21 09:58:07.796164 zram_generator::config[2093]: No configuration found.
Apr 21 09:58:07.907186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:58:07.978166 systemd[1]: Reloading finished in 281 ms.
Apr 21 09:58:08.023994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 09:58:08.024069 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 09:58:08.025871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 09:58:08.033740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 09:58:08.146175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 09:58:08.151568 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 09:58:08.191427 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 09:58:08.191427 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 09:58:08.191427 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 09:58:08.191427 kubelet[2143]: I0421 09:58:08.191376 2143 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 09:58:08.889522 kubelet[2143]: I0421 09:58:08.889467 2143 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 09:58:08.889522 kubelet[2143]: I0421 09:58:08.889504 2143 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 09:58:08.889855 kubelet[2143]: I0421 09:58:08.889807 2143 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 09:58:08.916317 kubelet[2143]: E0421 09:58:08.916275 2143 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.104.221.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 09:58:08.917876 kubelet[2143]: I0421 09:58:08.917456 2143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 09:58:08.928526 kubelet[2143]: E0421 09:58:08.928465 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 09:58:08.928526 kubelet[2143]: I0421 09:58:08.928517 2143 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 09:58:08.934968 kubelet[2143]: I0421 09:58:08.934762 2143 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 09:58:08.936072 kubelet[2143]: I0421 09:58:08.935982 2143 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 09:58:08.936179 kubelet[2143]: I0421 09:58:08.936034 2143 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-d-6a70a4c656","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 09:58:08.936349 kubelet[2143]: I0421 09:58:08.936188 2143 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 09:58:08.936349 kubelet[2143]: I0421 09:58:08.936198 2143 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 09:58:08.936447 kubelet[2143]: I0421 09:58:08.936402 2143 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 09:58:08.941589 kubelet[2143]: I0421 09:58:08.941443 2143 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 09:58:08.941589 kubelet[2143]: I0421 09:58:08.941475 2143 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 09:58:08.941589 kubelet[2143]: I0421 09:58:08.941504 2143 kubelet.go:386] "Adding apiserver pod source"
Apr 21 09:58:08.943441 kubelet[2143]: I0421 09:58:08.943056 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 09:58:08.948267 kubelet[2143]: E0421 09:58:08.948232 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.104.221.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-d-6a70a4c656&limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 09:58:08.950432 kubelet[2143]: E0421 09:58:08.950397 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.104.221.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 09:58:08.950727 kubelet[2143]: I0421 09:58:08.950528 2143 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 09:58:08.951294 kubelet[2143]: I0421 09:58:08.951235 2143 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 09:58:08.951396 kubelet[2143]: W0421 09:58:08.951382 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 09:58:08.955804 kubelet[2143]: I0421 09:58:08.955762 2143 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 09:58:08.955983 kubelet[2143]: I0421 09:58:08.955967 2143 server.go:1289] "Started kubelet"
Apr 21 09:58:08.960598 kubelet[2143]: I0421 09:58:08.960382 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 09:58:08.961263 kubelet[2143]: E0421 09:58:08.959900 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.104.221.144:6443/api/v1/namespaces/default/events\": dial tcp 178.104.221.144:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-d-6a70a4c656.18a856cc6a61f848 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-d-6a70a4c656,UID:ci-4081-3-7-d-6a70a4c656,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-d-6a70a4c656,},FirstTimestamp:2026-04-21 09:58:08.95578324 +0000 UTC m=+0.800827601,LastTimestamp:2026-04-21 09:58:08.95578324 +0000 UTC m=+0.800827601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-d-6a70a4c656,}"
Apr 21 09:58:08.964307 kubelet[2143]: I0421 09:58:08.963983 2143 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 09:58:08.964882 kubelet[2143]: I0421 09:58:08.964851 2143 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 09:58:08.968085 kubelet[2143]: I0421 09:58:08.968017 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 09:58:08.968282 kubelet[2143]: I0421 09:58:08.968259 2143 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 09:58:08.968565 kubelet[2143]: I0421 09:58:08.968537 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 09:58:08.970759 kubelet[2143]: I0421 09:58:08.970445 2143 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 09:58:08.970759 kubelet[2143]: I0421 09:58:08.970566 2143 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 09:58:08.970759 kubelet[2143]: I0421 09:58:08.970616 2143 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 09:58:08.971070 kubelet[2143]: E0421 09:58:08.971023 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.104.221.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 09:58:08.971234 kubelet[2143]: E0421 09:58:08.971206 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found"
Apr 21 09:58:08.971312 kubelet[2143]: E0421 09:58:08.971277 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.221.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-d-6a70a4c656?timeout=10s\": dial tcp 178.104.221.144:6443: connect: connection refused" interval="200ms"
Apr 21 09:58:08.972187 kubelet[2143]: I0421 09:58:08.972150 2143 factory.go:223] Registration of the systemd container factory successfully
Apr 21 09:58:08.972769 kubelet[2143]: I0421 09:58:08.972348 2143 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 09:58:08.973878 kubelet[2143]: E0421 09:58:08.973766 2143 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 09:58:08.974108 kubelet[2143]: I0421 09:58:08.974085 2143 factory.go:223] Registration of the containerd container factory successfully
Apr 21 09:58:08.986264 kubelet[2143]: I0421 09:58:08.986243 2143 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 09:58:08.986396 kubelet[2143]: I0421 09:58:08.986385 2143 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 09:58:08.986533 kubelet[2143]: I0421 09:58:08.986473 2143 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 09:58:08.991678 kubelet[2143]: I0421 09:58:08.991640 2143 policy_none.go:49] "None policy: Start"
Apr 21 09:58:08.991678 kubelet[2143]: I0421 09:58:08.991677 2143 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 09:58:08.991795 kubelet[2143]: I0421 09:58:08.991691 2143 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 09:58:08.996219 kubelet[2143]: I0421 09:58:08.996174 2143 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 09:58:08.997954 kubelet[2143]: I0421 09:58:08.997919 2143 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 09:58:08.998038 kubelet[2143]: I0421 09:58:08.997961 2143 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 09:58:08.998038 kubelet[2143]: I0421 09:58:08.997987 2143 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 09:58:08.998038 kubelet[2143]: I0421 09:58:08.997995 2143 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 09:58:08.998106 kubelet[2143]: E0421 09:58:08.998044 2143 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 09:58:09.002483 kubelet[2143]: E0421 09:58:09.002300 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.104.221.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 09:58:09.006800 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 09:58:09.019283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 09:58:09.023612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 09:58:09.041607 kubelet[2143]: E0421 09:58:09.040642 2143 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 09:58:09.041607 kubelet[2143]: I0421 09:58:09.041048 2143 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 09:58:09.041607 kubelet[2143]: I0421 09:58:09.041072 2143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 09:58:09.041607 kubelet[2143]: I0421 09:58:09.041431 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 09:58:09.044615 kubelet[2143]: E0421 09:58:09.044589 2143 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 09:58:09.045020 kubelet[2143]: E0421 09:58:09.044996 2143 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-d-6a70a4c656\" not found"
Apr 21 09:58:09.115338 systemd[1]: Created slice kubepods-burstable-pod9a7eec9103a0a9bf8ff79a225feff015.slice - libcontainer container kubepods-burstable-pod9a7eec9103a0a9bf8ff79a225feff015.slice.
Apr 21 09:58:09.135033 kubelet[2143]: E0421 09:58:09.134998 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656"
Apr 21 09:58:09.138618 systemd[1]: Created slice kubepods-burstable-pod82bebe03c5def7bb1c0cf8f597e44325.slice - libcontainer container kubepods-burstable-pod82bebe03c5def7bb1c0cf8f597e44325.slice.
Apr 21 09:58:09.141256 kubelet[2143]: E0421 09:58:09.141162 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656"
Apr 21 09:58:09.144385 kubelet[2143]: I0421 09:58:09.144351 2143 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-d-6a70a4c656"
Apr 21 09:58:09.145686 kubelet[2143]: E0421 09:58:09.145635 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.104.221.144:6443/api/v1/nodes\": dial tcp 178.104.221.144:6443: connect: connection refused" node="ci-4081-3-7-d-6a70a4c656"
Apr 21 09:58:09.146260 systemd[1]: Created slice kubepods-burstable-pod618b7ccdbfbfa284dc9c433aedcb84f5.slice - libcontainer container kubepods-burstable-pod618b7ccdbfbfa284dc9c433aedcb84f5.slice.
Apr 21 09:58:09.148095 kubelet[2143]: E0421 09:58:09.148049 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.171916 kubelet[2143]: I0421 09:58:09.171843 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618b7ccdbfbfa284dc9c433aedcb84f5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-d-6a70a4c656\" (UID: \"618b7ccdbfbfa284dc9c433aedcb84f5\") " pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.171916 kubelet[2143]: I0421 09:58:09.171902 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.171916 kubelet[2143]: I0421 09:58:09.171927 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172148 kubelet[2143]: I0421 09:58:09.171950 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172148 kubelet[2143]: I0421 
09:58:09.171974 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172148 kubelet[2143]: I0421 09:58:09.171993 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172148 kubelet[2143]: I0421 09:58:09.172013 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172148 kubelet[2143]: I0421 09:58:09.172032 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172345 kubelet[2143]: I0421 09:58:09.172054 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.172550 kubelet[2143]: E0421 09:58:09.172492 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.221.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-d-6a70a4c656?timeout=10s\": dial tcp 178.104.221.144:6443: connect: connection refused" interval="400ms" Apr 21 09:58:09.349833 kubelet[2143]: I0421 09:58:09.349150 2143 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.349833 kubelet[2143]: E0421 09:58:09.349556 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.104.221.144:6443/api/v1/nodes\": dial tcp 178.104.221.144:6443: connect: connection refused" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.438147 containerd[1482]: time="2026-04-21T09:58:09.437658120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-d-6a70a4c656,Uid:9a7eec9103a0a9bf8ff79a225feff015,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:09.443637 containerd[1482]: time="2026-04-21T09:58:09.443465520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-d-6a70a4c656,Uid:82bebe03c5def7bb1c0cf8f597e44325,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:09.449624 containerd[1482]: time="2026-04-21T09:58:09.449360920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-d-6a70a4c656,Uid:618b7ccdbfbfa284dc9c433aedcb84f5,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:09.573808 kubelet[2143]: E0421 09:58:09.573672 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.221.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-d-6a70a4c656?timeout=10s\": dial tcp 
178.104.221.144:6443: connect: connection refused" interval="800ms" Apr 21 09:58:09.752615 kubelet[2143]: I0421 09:58:09.752513 2143 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.753173 kubelet[2143]: E0421 09:58:09.753138 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.104.221.144:6443/api/v1/nodes\": dial tcp 178.104.221.144:6443: connect: connection refused" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:09.874910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761720239.mount: Deactivated successfully. Apr 21 09:58:09.882765 containerd[1482]: time="2026-04-21T09:58:09.881888680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:09.883900 containerd[1482]: time="2026-04-21T09:58:09.883860320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 21 09:58:09.888377 containerd[1482]: time="2026-04-21T09:58:09.887435920Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:09.889843 containerd[1482]: time="2026-04-21T09:58:09.888871080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 09:58:09.889843 containerd[1482]: time="2026-04-21T09:58:09.889066440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:09.891606 containerd[1482]: time="2026-04-21T09:58:09.891371800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=0" Apr 21 09:58:09.891606 containerd[1482]: time="2026-04-21T09:58:09.891538480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:09.896442 containerd[1482]: time="2026-04-21T09:58:09.896386920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:09.897707 containerd[1482]: time="2026-04-21T09:58:09.897448040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 448.0088ms" Apr 21 09:58:09.899332 containerd[1482]: time="2026-04-21T09:58:09.899295640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 461.519ms" Apr 21 09:58:09.900467 containerd[1482]: time="2026-04-21T09:58:09.900394320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.84704ms" Apr 21 09:58:09.922475 kubelet[2143]: E0421 09:58:09.922430 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://178.104.221.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 09:58:10.027526 containerd[1482]: time="2026-04-21T09:58:10.027232800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:10.027526 containerd[1482]: time="2026-04-21T09:58:10.027313000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:10.027526 containerd[1482]: time="2026-04-21T09:58:10.027328320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.029431 containerd[1482]: time="2026-04-21T09:58:10.027749800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.031301 containerd[1482]: time="2026-04-21T09:58:10.031066240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:10.031301 containerd[1482]: time="2026-04-21T09:58:10.031120760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:10.031301 containerd[1482]: time="2026-04-21T09:58:10.031138600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.031301 containerd[1482]: time="2026-04-21T09:58:10.031218320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.031529 containerd[1482]: time="2026-04-21T09:58:10.031430640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:10.031667 containerd[1482]: time="2026-04-21T09:58:10.031615960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:10.032838 containerd[1482]: time="2026-04-21T09:58:10.031740200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.033130 containerd[1482]: time="2026-04-21T09:58:10.032952240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:10.054231 systemd[1]: Started cri-containerd-d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec.scope - libcontainer container d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec. Apr 21 09:58:10.060246 systemd[1]: Started cri-containerd-e52b0ab26eabe3e0eb205881eb39b4b1611de322eb6f9a569da303eb4195849e.scope - libcontainer container e52b0ab26eabe3e0eb205881eb39b4b1611de322eb6f9a569da303eb4195849e. Apr 21 09:58:10.064102 systemd[1]: Started cri-containerd-a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d.scope - libcontainer container a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d. 
Apr 21 09:58:10.111890 containerd[1482]: time="2026-04-21T09:58:10.110989000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-d-6a70a4c656,Uid:9a7eec9103a0a9bf8ff79a225feff015,Namespace:kube-system,Attempt:0,} returns sandbox id \"e52b0ab26eabe3e0eb205881eb39b4b1611de322eb6f9a569da303eb4195849e\"" Apr 21 09:58:10.116479 containerd[1482]: time="2026-04-21T09:58:10.116427160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-d-6a70a4c656,Uid:618b7ccdbfbfa284dc9c433aedcb84f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec\"" Apr 21 09:58:10.121488 containerd[1482]: time="2026-04-21T09:58:10.121449280Z" level=info msg="CreateContainer within sandbox \"e52b0ab26eabe3e0eb205881eb39b4b1611de322eb6f9a569da303eb4195849e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 09:58:10.122876 containerd[1482]: time="2026-04-21T09:58:10.122840920Z" level=info msg="CreateContainer within sandbox \"d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 09:58:10.127564 containerd[1482]: time="2026-04-21T09:58:10.127518160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-d-6a70a4c656,Uid:82bebe03c5def7bb1c0cf8f597e44325,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d\"" Apr 21 09:58:10.132778 containerd[1482]: time="2026-04-21T09:58:10.132584560Z" level=info msg="CreateContainer within sandbox \"a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 09:58:10.156253 containerd[1482]: time="2026-04-21T09:58:10.156211640Z" level=info msg="CreateContainer within sandbox 
\"d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9\"" Apr 21 09:58:10.157877 containerd[1482]: time="2026-04-21T09:58:10.157625880Z" level=info msg="StartContainer for \"a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9\"" Apr 21 09:58:10.158823 containerd[1482]: time="2026-04-21T09:58:10.158785880Z" level=info msg="CreateContainer within sandbox \"e52b0ab26eabe3e0eb205881eb39b4b1611de322eb6f9a569da303eb4195849e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b66d55249908f68ffdc54f06199d1911eb63d5e357fee47b944d2d5f7cc9fd71\"" Apr 21 09:58:10.159318 containerd[1482]: time="2026-04-21T09:58:10.159292320Z" level=info msg="StartContainer for \"b66d55249908f68ffdc54f06199d1911eb63d5e357fee47b944d2d5f7cc9fd71\"" Apr 21 09:58:10.163070 containerd[1482]: time="2026-04-21T09:58:10.163030520Z" level=info msg="CreateContainer within sandbox \"a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d\"" Apr 21 09:58:10.164756 containerd[1482]: time="2026-04-21T09:58:10.163659720Z" level=info msg="StartContainer for \"26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d\"" Apr 21 09:58:10.189271 kubelet[2143]: E0421 09:58:10.189227 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.104.221.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 09:58:10.199991 systemd[1]: Started cri-containerd-26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d.scope - libcontainer 
container 26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d. Apr 21 09:58:10.202438 systemd[1]: Started cri-containerd-a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9.scope - libcontainer container a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9. Apr 21 09:58:10.207616 systemd[1]: Started cri-containerd-b66d55249908f68ffdc54f06199d1911eb63d5e357fee47b944d2d5f7cc9fd71.scope - libcontainer container b66d55249908f68ffdc54f06199d1911eb63d5e357fee47b944d2d5f7cc9fd71. Apr 21 09:58:10.259769 containerd[1482]: time="2026-04-21T09:58:10.258982880Z" level=info msg="StartContainer for \"a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9\" returns successfully" Apr 21 09:58:10.270359 containerd[1482]: time="2026-04-21T09:58:10.270044800Z" level=info msg="StartContainer for \"26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d\" returns successfully" Apr 21 09:58:10.273126 containerd[1482]: time="2026-04-21T09:58:10.273093640Z" level=info msg="StartContainer for \"b66d55249908f68ffdc54f06199d1911eb63d5e357fee47b944d2d5f7cc9fd71\" returns successfully" Apr 21 09:58:10.302729 kubelet[2143]: E0421 09:58:10.302577 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.104.221.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-d-6a70a4c656&limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 09:58:10.374880 kubelet[2143]: E0421 09:58:10.374811 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.221.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-d-6a70a4c656?timeout=10s\": dial tcp 178.104.221.144:6443: connect: connection refused" interval="1.6s" Apr 21 09:58:10.389971 kubelet[2143]: E0421 09:58:10.389921 2143 reflector.go:200] "Failed to watch" 
err="failed to list *v1.RuntimeClass: Get \"https://178.104.221.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.104.221.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 09:58:10.556563 kubelet[2143]: I0421 09:58:10.556062 2143 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:11.013496 kubelet[2143]: E0421 09:58:11.012809 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:11.016586 kubelet[2143]: E0421 09:58:11.016235 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:11.023338 kubelet[2143]: E0421 09:58:11.022143 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.024395 kubelet[2143]: E0421 09:58:12.024355 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.024761 kubelet[2143]: E0421 09:58:12.024597 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.293393 kubelet[2143]: E0421 09:58:12.293274 2143 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-d-6a70a4c656\" not found" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.388833 kubelet[2143]: I0421 09:58:12.386872 2143 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.388833 kubelet[2143]: E0421 09:58:12.386909 2143 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-d-6a70a4c656\": node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.405628 kubelet[2143]: E0421 09:58:12.405584 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.506633 kubelet[2143]: E0421 09:58:12.506581 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.607659 kubelet[2143]: E0421 09:58:12.607531 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.708091 kubelet[2143]: E0421 09:58:12.708032 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.808757 kubelet[2143]: E0421 09:58:12.808704 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.909781 kubelet[2143]: E0421 09:58:12.909625 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:12.951392 kubelet[2143]: I0421 09:58:12.951350 2143 apiserver.go:52] "Watching apiserver" Apr 21 09:58:12.971912 kubelet[2143]: I0421 09:58:12.971364 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.971912 kubelet[2143]: I0421 09:58:12.971764 2143 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 09:58:12.980075 kubelet[2143]: E0421 09:58:12.980038 2143 kubelet.go:3311] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.980075 kubelet[2143]: I0421 09:58:12.980073 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.983369 kubelet[2143]: E0421 09:58:12.983317 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.983369 kubelet[2143]: I0421 09:58:12.983368 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:12.985634 kubelet[2143]: E0421 09:58:12.985593 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-d-6a70a4c656\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:13.257216 kubelet[2143]: I0421 09:58:13.257180 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:14.613750 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-7.scope)... Apr 21 09:58:14.614107 systemd[1]: Reloading... Apr 21 09:58:14.710852 zram_generator::config[2469]: No configuration found. Apr 21 09:58:14.823564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 09:58:14.916320 systemd[1]: Reloading finished in 301 ms. Apr 21 09:58:14.958055 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 09:58:14.974980 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 09:58:14.975416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:14.975520 systemd[1]: kubelet.service: Consumed 1.192s CPU time, 127.9M memory peak, 0B memory swap peak. Apr 21 09:58:14.981212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:15.106320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:15.118586 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 09:58:15.170279 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:15.170279 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 09:58:15.170279 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 09:58:15.170586 kubelet[2514]: I0421 09:58:15.170253 2514 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 09:58:15.179304 kubelet[2514]: I0421 09:58:15.179249 2514 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 09:58:15.179304 kubelet[2514]: I0421 09:58:15.179291 2514 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 09:58:15.179696 kubelet[2514]: I0421 09:58:15.179649 2514 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 09:58:15.181476 kubelet[2514]: I0421 09:58:15.181449 2514 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 09:58:15.184250 kubelet[2514]: I0421 09:58:15.184037 2514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:15.192457 kubelet[2514]: E0421 09:58:15.192406 2514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 09:58:15.192638 kubelet[2514]: I0421 09:58:15.192597 2514 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 09:58:15.195639 kubelet[2514]: I0421 09:58:15.195468 2514 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 09:58:15.195731 kubelet[2514]: I0421 09:58:15.195696 2514 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 09:58:15.195944 kubelet[2514]: I0421 09:58:15.195720 2514 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-d-6a70a4c656","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 09:58:15.196034 kubelet[2514]: I0421 09:58:15.195950 2514 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
09:58:15.196034 kubelet[2514]: I0421 09:58:15.195966 2514 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 09:58:15.196034 kubelet[2514]: I0421 09:58:15.196011 2514 state_mem.go:36] "Initialized new in-memory state store" Apr 21 09:58:15.196186 kubelet[2514]: I0421 09:58:15.196176 2514 kubelet.go:480] "Attempting to sync node with API server" Apr 21 09:58:15.196218 kubelet[2514]: I0421 09:58:15.196197 2514 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 09:58:15.196253 kubelet[2514]: I0421 09:58:15.196219 2514 kubelet.go:386] "Adding apiserver pod source" Apr 21 09:58:15.196253 kubelet[2514]: I0421 09:58:15.196232 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 09:58:15.199077 kubelet[2514]: I0421 09:58:15.199045 2514 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 09:58:15.199975 kubelet[2514]: I0421 09:58:15.199943 2514 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 09:58:15.202509 kubelet[2514]: I0421 09:58:15.202482 2514 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 09:58:15.202761 kubelet[2514]: I0421 09:58:15.202652 2514 server.go:1289] "Started kubelet" Apr 21 09:58:15.204826 kubelet[2514]: I0421 09:58:15.204795 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 09:58:15.215995 kubelet[2514]: I0421 09:58:15.214927 2514 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 09:58:15.216536 kubelet[2514]: I0421 09:58:15.216506 2514 server.go:317] "Adding debug handlers to kubelet server" Apr 21 09:58:15.220464 kubelet[2514]: I0421 09:58:15.220417 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 09:58:15.220866 kubelet[2514]: I0421 09:58:15.220844 2514 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 09:58:15.221181 kubelet[2514]: I0421 09:58:15.221159 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 09:58:15.222857 kubelet[2514]: I0421 09:58:15.222838 2514 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 09:58:15.223152 kubelet[2514]: E0421 09:58:15.223125 2514 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-d-6a70a4c656\" not found" Apr 21 09:58:15.225044 kubelet[2514]: I0421 09:58:15.225018 2514 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 09:58:15.225256 kubelet[2514]: I0421 09:58:15.225241 2514 reconciler.go:26] "Reconciler: start to sync state" Apr 21 09:58:15.227643 kubelet[2514]: I0421 09:58:15.227590 2514 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 09:58:15.228651 kubelet[2514]: I0421 09:58:15.228630 2514 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 09:58:15.228742 kubelet[2514]: I0421 09:58:15.228731 2514 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 09:58:15.228840 kubelet[2514]: I0421 09:58:15.228802 2514 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 09:58:15.228912 kubelet[2514]: I0421 09:58:15.228902 2514 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 09:58:15.229030 kubelet[2514]: E0421 09:58:15.228996 2514 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 09:58:15.244835 kubelet[2514]: E0421 09:58:15.244785 2514 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 09:58:15.245271 kubelet[2514]: I0421 09:58:15.245251 2514 factory.go:223] Registration of the containerd container factory successfully Apr 21 09:58:15.245837 kubelet[2514]: I0421 09:58:15.245348 2514 factory.go:223] Registration of the systemd container factory successfully Apr 21 09:58:15.245837 kubelet[2514]: I0421 09:58:15.245434 2514 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 09:58:15.302342 kubelet[2514]: I0421 09:58:15.302289 2514 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 09:58:15.302533 kubelet[2514]: I0421 09:58:15.302512 2514 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302590 2514 state_mem.go:36] "Initialized new in-memory state store" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302765 2514 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302777 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302794 2514 policy_none.go:49] "None policy: Start" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302803 2514 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302812 2514 state_mem.go:35] "Initializing new in-memory state store" Apr 21 09:58:15.303554 kubelet[2514]: I0421 09:58:15.302968 2514 state_mem.go:75] "Updated machine memory state" Apr 21 09:58:15.307277 kubelet[2514]: E0421 09:58:15.307258 2514 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 09:58:15.308154 kubelet[2514]: I0421 09:58:15.308135 2514 eviction_manager.go:189] "Eviction 
manager: starting control loop" Apr 21 09:58:15.308497 kubelet[2514]: I0421 09:58:15.308377 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 09:58:15.308850 kubelet[2514]: I0421 09:58:15.308806 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 09:58:15.310907 kubelet[2514]: E0421 09:58:15.310885 2514 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 09:58:15.330661 kubelet[2514]: I0421 09:58:15.330603 2514 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.331340 kubelet[2514]: I0421 09:58:15.331016 2514 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.331704 kubelet[2514]: I0421 09:58:15.331155 2514 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.343065 kubelet[2514]: E0421 09:58:15.343028 2514 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.412796 kubelet[2514]: I0421 09:58:15.412736 2514 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.424919 kubelet[2514]: I0421 09:58:15.424783 2514 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.425026 kubelet[2514]: I0421 09:58:15.424944 2514 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.427456 kubelet[2514]: I0421 09:58:15.427427 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.427776 kubelet[2514]: I0421 09:58:15.427595 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.427776 kubelet[2514]: I0421 09:58:15.427738 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.428038 kubelet[2514]: I0421 09:58:15.427796 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618b7ccdbfbfa284dc9c433aedcb84f5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-d-6a70a4c656\" (UID: \"618b7ccdbfbfa284dc9c433aedcb84f5\") " pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.428038 kubelet[2514]: I0421 09:58:15.427870 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" 
Apr 21 09:58:15.429101 kubelet[2514]: I0421 09:58:15.428569 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.429101 kubelet[2514]: I0421 09:58:15.428694 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82bebe03c5def7bb1c0cf8f597e44325-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-d-6a70a4c656\" (UID: \"82bebe03c5def7bb1c0cf8f597e44325\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.429101 kubelet[2514]: I0421 09:58:15.429023 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:15.429101 kubelet[2514]: I0421 09:58:15.429046 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a7eec9103a0a9bf8ff79a225feff015-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" (UID: \"9a7eec9103a0a9bf8ff79a225feff015\") " pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:16.197572 kubelet[2514]: I0421 09:58:16.197507 2514 apiserver.go:52] "Watching apiserver" Apr 21 09:58:16.225353 kubelet[2514]: I0421 09:58:16.225277 2514 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 09:58:16.284574 kubelet[2514]: I0421 
09:58:16.284523 2514 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:16.297935 kubelet[2514]: E0421 09:58:16.296869 2514 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-d-6a70a4c656\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:16.321317 kubelet[2514]: I0421 09:58:16.321087 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-d-6a70a4c656" podStartSLOduration=3.321071 podStartE2EDuration="3.321071s" podCreationTimestamp="2026-04-21 09:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:16.32087892 +0000 UTC m=+1.194628801" watchObservedRunningTime="2026-04-21 09:58:16.321071 +0000 UTC m=+1.194820841" Apr 21 09:58:16.322303 kubelet[2514]: I0421 09:58:16.322052 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-d-6a70a4c656" podStartSLOduration=1.32189232 podStartE2EDuration="1.32189232s" podCreationTimestamp="2026-04-21 09:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:16.30742888 +0000 UTC m=+1.181178761" watchObservedRunningTime="2026-04-21 09:58:16.32189232 +0000 UTC m=+1.195642201" Apr 21 09:58:16.349398 kubelet[2514]: I0421 09:58:16.349275 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-d-6a70a4c656" podStartSLOduration=1.3492578 podStartE2EDuration="1.3492578s" podCreationTimestamp="2026-04-21 09:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:16.33462624 +0000 UTC 
m=+1.208376121" watchObservedRunningTime="2026-04-21 09:58:16.3492578 +0000 UTC m=+1.223007681" Apr 21 09:58:21.274519 kubelet[2514]: I0421 09:58:21.274454 2514 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 09:58:21.275861 kubelet[2514]: I0421 09:58:21.275236 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 09:58:21.275900 containerd[1482]: time="2026-04-21T09:58:21.274998560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 09:58:21.813623 systemd[1]: Created slice kubepods-besteffort-pod5e25259c_3396_400d_9f42_43d09b420ebb.slice - libcontainer container kubepods-besteffort-pod5e25259c_3396_400d_9f42_43d09b420ebb.slice. Apr 21 09:58:21.875134 kubelet[2514]: I0421 09:58:21.875098 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e25259c-3396-400d-9f42-43d09b420ebb-lib-modules\") pod \"kube-proxy-pbjkp\" (UID: \"5e25259c-3396-400d-9f42-43d09b420ebb\") " pod="kube-system/kube-proxy-pbjkp" Apr 21 09:58:21.875134 kubelet[2514]: I0421 09:58:21.875139 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2gb\" (UniqueName: \"kubernetes.io/projected/5e25259c-3396-400d-9f42-43d09b420ebb-kube-api-access-bq2gb\") pod \"kube-proxy-pbjkp\" (UID: \"5e25259c-3396-400d-9f42-43d09b420ebb\") " pod="kube-system/kube-proxy-pbjkp" Apr 21 09:58:21.875309 kubelet[2514]: I0421 09:58:21.875164 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e25259c-3396-400d-9f42-43d09b420ebb-kube-proxy\") pod \"kube-proxy-pbjkp\" (UID: \"5e25259c-3396-400d-9f42-43d09b420ebb\") " pod="kube-system/kube-proxy-pbjkp" Apr 21 09:58:21.875309 kubelet[2514]: 
I0421 09:58:21.875182 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e25259c-3396-400d-9f42-43d09b420ebb-xtables-lock\") pod \"kube-proxy-pbjkp\" (UID: \"5e25259c-3396-400d-9f42-43d09b420ebb\") " pod="kube-system/kube-proxy-pbjkp" Apr 21 09:58:21.990153 kubelet[2514]: E0421 09:58:21.989902 2514 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 21 09:58:21.990153 kubelet[2514]: E0421 09:58:21.989946 2514 projected.go:194] Error preparing data for projected volume kube-api-access-bq2gb for pod kube-system/kube-proxy-pbjkp: configmap "kube-root-ca.crt" not found Apr 21 09:58:21.990153 kubelet[2514]: E0421 09:58:21.990034 2514 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e25259c-3396-400d-9f42-43d09b420ebb-kube-api-access-bq2gb podName:5e25259c-3396-400d-9f42-43d09b420ebb nodeName:}" failed. No retries permitted until 2026-04-21 09:58:22.49000984 +0000 UTC m=+7.363759721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bq2gb" (UniqueName: "kubernetes.io/projected/5e25259c-3396-400d-9f42-43d09b420ebb-kube-api-access-bq2gb") pod "kube-proxy-pbjkp" (UID: "5e25259c-3396-400d-9f42-43d09b420ebb") : configmap "kube-root-ca.crt" not found Apr 21 09:58:22.476547 systemd[1]: Created slice kubepods-besteffort-podc4f86886_ecf9_4f92_8853_038610bdcce9.slice - libcontainer container kubepods-besteffort-podc4f86886_ecf9_4f92_8853_038610bdcce9.slice. 
Apr 21 09:58:22.580052 kubelet[2514]: I0421 09:58:22.579920 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8jg\" (UniqueName: \"kubernetes.io/projected/c4f86886-ecf9-4f92-8853-038610bdcce9-kube-api-access-6k8jg\") pod \"tigera-operator-6bf85f8dd-gcpjn\" (UID: \"c4f86886-ecf9-4f92-8853-038610bdcce9\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gcpjn" Apr 21 09:58:22.580052 kubelet[2514]: I0421 09:58:22.580009 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4f86886-ecf9-4f92-8853-038610bdcce9-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-gcpjn\" (UID: \"c4f86886-ecf9-4f92-8853-038610bdcce9\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gcpjn" Apr 21 09:58:22.725432 containerd[1482]: time="2026-04-21T09:58:22.725339200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbjkp,Uid:5e25259c-3396-400d-9f42-43d09b420ebb,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:22.752966 containerd[1482]: time="2026-04-21T09:58:22.752303560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:22.752966 containerd[1482]: time="2026-04-21T09:58:22.752369480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:22.752966 containerd[1482]: time="2026-04-21T09:58:22.752385600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:22.753136 containerd[1482]: time="2026-04-21T09:58:22.752490200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:22.777028 systemd[1]: Started cri-containerd-c0c210b5a214ca0ba48e7b4fef2a86e7cf6b22aae8ff7e5ded01eb24000232ef.scope - libcontainer container c0c210b5a214ca0ba48e7b4fef2a86e7cf6b22aae8ff7e5ded01eb24000232ef. Apr 21 09:58:22.781021 containerd[1482]: time="2026-04-21T09:58:22.780947760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gcpjn,Uid:c4f86886-ecf9-4f92-8853-038610bdcce9,Namespace:tigera-operator,Attempt:0,}" Apr 21 09:58:22.805152 containerd[1482]: time="2026-04-21T09:58:22.804940880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbjkp,Uid:5e25259c-3396-400d-9f42-43d09b420ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0c210b5a214ca0ba48e7b4fef2a86e7cf6b22aae8ff7e5ded01eb24000232ef\"" Apr 21 09:58:22.814555 containerd[1482]: time="2026-04-21T09:58:22.814488880Z" level=info msg="CreateContainer within sandbox \"c0c210b5a214ca0ba48e7b4fef2a86e7cf6b22aae8ff7e5ded01eb24000232ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 09:58:22.814896 containerd[1482]: time="2026-04-21T09:58:22.814158560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:22.814896 containerd[1482]: time="2026-04-21T09:58:22.814213960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:22.814896 containerd[1482]: time="2026-04-21T09:58:22.814225000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:22.814896 containerd[1482]: time="2026-04-21T09:58:22.814313520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:22.833045 systemd[1]: Started cri-containerd-10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869.scope - libcontainer container 10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869. Apr 21 09:58:22.833690 containerd[1482]: time="2026-04-21T09:58:22.833615960Z" level=info msg="CreateContainer within sandbox \"c0c210b5a214ca0ba48e7b4fef2a86e7cf6b22aae8ff7e5ded01eb24000232ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92b8f6b0f27c8a3808ba673c4630c63d80e005640e92cab5348f7355c9b7f2b4\"" Apr 21 09:58:22.836072 containerd[1482]: time="2026-04-21T09:58:22.836042240Z" level=info msg="StartContainer for \"92b8f6b0f27c8a3808ba673c4630c63d80e005640e92cab5348f7355c9b7f2b4\"" Apr 21 09:58:22.871988 systemd[1]: Started cri-containerd-92b8f6b0f27c8a3808ba673c4630c63d80e005640e92cab5348f7355c9b7f2b4.scope - libcontainer container 92b8f6b0f27c8a3808ba673c4630c63d80e005640e92cab5348f7355c9b7f2b4. 
Apr 21 09:58:22.880909 containerd[1482]: time="2026-04-21T09:58:22.880810120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gcpjn,Uid:c4f86886-ecf9-4f92-8853-038610bdcce9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869\"" Apr 21 09:58:22.885624 containerd[1482]: time="2026-04-21T09:58:22.885361680Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 09:58:22.908049 containerd[1482]: time="2026-04-21T09:58:22.907961080Z" level=info msg="StartContainer for \"92b8f6b0f27c8a3808ba673c4630c63d80e005640e92cab5348f7355c9b7f2b4\" returns successfully" Apr 21 09:58:23.320700 kubelet[2514]: I0421 09:58:23.320487 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbjkp" podStartSLOduration=2.3204724 podStartE2EDuration="2.3204724s" podCreationTimestamp="2026-04-21 09:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:23.32032216 +0000 UTC m=+8.194072081" watchObservedRunningTime="2026-04-21 09:58:23.3204724 +0000 UTC m=+8.194222281" Apr 21 09:58:24.486608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916840473.mount: Deactivated successfully. Apr 21 09:58:24.731236 systemd-timesyncd[1338]: Contacted time server 144.76.76.107:123 (2.flatcar.pool.ntp.org). Apr 21 09:58:24.731811 systemd-timesyncd[1338]: Initial clock synchronization to Tue 2026-04-21 09:58:24.968098 UTC. 
Apr 21 09:58:24.877806 containerd[1482]: time="2026-04-21T09:58:24.877655640Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:24.879806 containerd[1482]: time="2026-04-21T09:58:24.879436120Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 21 09:58:24.881792 containerd[1482]: time="2026-04-21T09:58:24.881280640Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:24.885169 containerd[1482]: time="2026-04-21T09:58:24.885115160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:24.886246 containerd[1482]: time="2026-04-21T09:58:24.886082520Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.00068184s" Apr 21 09:58:24.886246 containerd[1482]: time="2026-04-21T09:58:24.886120040Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 21 09:58:24.892072 containerd[1482]: time="2026-04-21T09:58:24.891866400Z" level=info msg="CreateContainer within sandbox \"10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 09:58:24.916942 containerd[1482]: time="2026-04-21T09:58:24.916799520Z" level=info msg="CreateContainer within sandbox 
\"10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357\"" Apr 21 09:58:24.917869 containerd[1482]: time="2026-04-21T09:58:24.917589520Z" level=info msg="StartContainer for \"8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357\"" Apr 21 09:58:24.951342 systemd[1]: Started cri-containerd-8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357.scope - libcontainer container 8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357. Apr 21 09:58:24.986211 containerd[1482]: time="2026-04-21T09:58:24.986103520Z" level=info msg="StartContainer for \"8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357\" returns successfully" Apr 21 09:58:25.628513 kubelet[2514]: I0421 09:58:25.628411 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-gcpjn" podStartSLOduration=1.626034583 podStartE2EDuration="3.628387143s" podCreationTimestamp="2026-04-21 09:58:22 +0000 UTC" firstStartedPulling="2026-04-21 09:58:22.88464836 +0000 UTC m=+7.758398241" lastFinishedPulling="2026-04-21 09:58:24.88700092 +0000 UTC m=+9.760750801" observedRunningTime="2026-04-21 09:58:25.327968637 +0000 UTC m=+10.201718561" watchObservedRunningTime="2026-04-21 09:58:25.628387143 +0000 UTC m=+10.502137066" Apr 21 09:58:29.254295 sudo[1665]: pam_unix(sudo:session): session closed for user root Apr 21 09:58:29.272861 sshd[1662]: pam_unix(sshd:session): session closed for user core Apr 21 09:58:29.277233 systemd[1]: sshd@6-178.104.221.144:22-50.85.169.122:35168.service: Deactivated successfully. Apr 21 09:58:29.277601 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Apr 21 09:58:29.280579 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 21 09:58:29.280951 systemd[1]: session-7.scope: Consumed 7.022s CPU time, 153.2M memory peak, 0B memory swap peak. Apr 21 09:58:29.284199 systemd-logind[1452]: Removed session 7. Apr 21 09:58:31.911924 update_engine[1453]: I20260421 09:58:31.911856 1453 update_attempter.cc:509] Updating boot flags... Apr 21 09:58:31.984931 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2913) Apr 21 09:58:32.088839 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2917) Apr 21 09:58:37.499231 systemd[1]: Created slice kubepods-besteffort-pod42093e54_b253_442c_acca_b4a8b237c5ad.slice - libcontainer container kubepods-besteffort-pod42093e54_b253_442c_acca_b4a8b237c5ad.slice. Apr 21 09:58:37.584919 kubelet[2514]: I0421 09:58:37.584613 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6jg\" (UniqueName: \"kubernetes.io/projected/42093e54-b253-442c-acca-b4a8b237c5ad-kube-api-access-9r6jg\") pod \"calico-typha-5bcfbf5855-rkjkv\" (UID: \"42093e54-b253-442c-acca-b4a8b237c5ad\") " pod="calico-system/calico-typha-5bcfbf5855-rkjkv" Apr 21 09:58:37.584919 kubelet[2514]: I0421 09:58:37.584700 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/42093e54-b253-442c-acca-b4a8b237c5ad-typha-certs\") pod \"calico-typha-5bcfbf5855-rkjkv\" (UID: \"42093e54-b253-442c-acca-b4a8b237c5ad\") " pod="calico-system/calico-typha-5bcfbf5855-rkjkv" Apr 21 09:58:37.584919 kubelet[2514]: I0421 09:58:37.584761 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42093e54-b253-442c-acca-b4a8b237c5ad-tigera-ca-bundle\") pod \"calico-typha-5bcfbf5855-rkjkv\" (UID: \"42093e54-b253-442c-acca-b4a8b237c5ad\") " 
pod="calico-system/calico-typha-5bcfbf5855-rkjkv" Apr 21 09:58:37.610132 systemd[1]: Created slice kubepods-besteffort-podbdbc3232_e15c_4d76_a4a3_84271f991f3c.slice - libcontainer container kubepods-besteffort-podbdbc3232_e15c_4d76_a4a3_84271f991f3c.slice. Apr 21 09:58:37.685864 kubelet[2514]: I0421 09:58:37.685351 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdbc3232-e15c-4d76-a4a3-84271f991f3c-node-certs\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.685864 kubelet[2514]: I0421 09:58:37.685397 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-lib-modules\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.685864 kubelet[2514]: I0421 09:58:37.685413 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-sys-fs\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.685864 kubelet[2514]: I0421 09:58:37.685429 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-cni-bin-dir\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.685864 kubelet[2514]: I0421 09:58:37.685444 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: 
\"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-nodeproc\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686130 kubelet[2514]: I0421 09:58:37.685459 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795rf\" (UniqueName: \"kubernetes.io/projected/bdbc3232-e15c-4d76-a4a3-84271f991f3c-kube-api-access-795rf\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686130 kubelet[2514]: I0421 09:58:37.685476 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-policysync\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686130 kubelet[2514]: I0421 09:58:37.685494 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdbc3232-e15c-4d76-a4a3-84271f991f3c-tigera-ca-bundle\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686130 kubelet[2514]: I0421 09:58:37.685511 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-bpffs\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686130 kubelet[2514]: I0421 09:58:37.685525 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-cni-net-dir\") pod 
\"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686429 kubelet[2514]: I0421 09:58:37.685538 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-xtables-lock\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686429 kubelet[2514]: I0421 09:58:37.685561 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-var-run-calico\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686429 kubelet[2514]: I0421 09:58:37.685576 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-flexvol-driver-host\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686429 kubelet[2514]: I0421 09:58:37.685604 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-cni-log-dir\") pod \"calico-node-qgdvb\" (UID: \"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.686429 kubelet[2514]: I0421 09:58:37.685619 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdbc3232-e15c-4d76-a4a3-84271f991f3c-var-lib-calico\") pod \"calico-node-qgdvb\" (UID: 
\"bdbc3232-e15c-4d76-a4a3-84271f991f3c\") " pod="calico-system/calico-node-qgdvb" Apr 21 09:58:37.732622 kubelet[2514]: E0421 09:58:37.732228 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:37.786563 kubelet[2514]: I0421 09:58:37.786215 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8c68fe-3eb8-447c-8335-6bfebeaa92f8-kubelet-dir\") pod \"csi-node-driver-lnmtt\" (UID: \"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8\") " pod="calico-system/csi-node-driver-lnmtt" Apr 21 09:58:37.786563 kubelet[2514]: I0421 09:58:37.786258 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnhd\" (UniqueName: \"kubernetes.io/projected/8e8c68fe-3eb8-447c-8335-6bfebeaa92f8-kube-api-access-dqnhd\") pod \"csi-node-driver-lnmtt\" (UID: \"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8\") " pod="calico-system/csi-node-driver-lnmtt" Apr 21 09:58:37.786563 kubelet[2514]: I0421 09:58:37.786305 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e8c68fe-3eb8-447c-8335-6bfebeaa92f8-socket-dir\") pod \"csi-node-driver-lnmtt\" (UID: \"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8\") " pod="calico-system/csi-node-driver-lnmtt" Apr 21 09:58:37.786563 kubelet[2514]: I0421 09:58:37.786362 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e8c68fe-3eb8-447c-8335-6bfebeaa92f8-varrun\") pod \"csi-node-driver-lnmtt\" (UID: \"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8\") " 
pod="calico-system/csi-node-driver-lnmtt" Apr 21 09:58:37.788845 kubelet[2514]: I0421 09:58:37.788381 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e8c68fe-3eb8-447c-8335-6bfebeaa92f8-registration-dir\") pod \"csi-node-driver-lnmtt\" (UID: \"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8\") " pod="calico-system/csi-node-driver-lnmtt" Apr 21 09:58:37.794008 kubelet[2514]: E0421 09:58:37.793976 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.794008 kubelet[2514]: W0421 09:58:37.794001 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.794454 kubelet[2514]: E0421 09:58:37.794036 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.794454 kubelet[2514]: E0421 09:58:37.794283 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.794454 kubelet[2514]: W0421 09:58:37.794293 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.794454 kubelet[2514]: E0421 09:58:37.794303 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.798027 kubelet[2514]: E0421 09:58:37.798000 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.798027 kubelet[2514]: W0421 09:58:37.798021 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.798935 kubelet[2514]: E0421 09:58:37.798042 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.798935 kubelet[2514]: E0421 09:58:37.798818 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.798935 kubelet[2514]: W0421 09:58:37.798859 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.798935 kubelet[2514]: E0421 09:58:37.798871 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.799624 kubelet[2514]: E0421 09:58:37.799602 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.799624 kubelet[2514]: W0421 09:58:37.799620 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.799786 kubelet[2514]: E0421 09:58:37.799633 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.802949 kubelet[2514]: E0421 09:58:37.802923 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.802949 kubelet[2514]: W0421 09:58:37.802945 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.802949 kubelet[2514]: E0421 09:58:37.802961 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.803679 kubelet[2514]: E0421 09:58:37.803663 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.803679 kubelet[2514]: W0421 09:58:37.803676 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.803925 kubelet[2514]: E0421 09:58:37.803686 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.803956 kubelet[2514]: E0421 09:58:37.803944 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.804024 kubelet[2514]: W0421 09:58:37.803955 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.804024 kubelet[2514]: E0421 09:58:37.804019 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.804268 kubelet[2514]: E0421 09:58:37.804253 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.804268 kubelet[2514]: W0421 09:58:37.804266 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.804659 kubelet[2514]: E0421 09:58:37.804275 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.804659 kubelet[2514]: E0421 09:58:37.804462 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.804659 kubelet[2514]: W0421 09:58:37.804471 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.804659 kubelet[2514]: E0421 09:58:37.804479 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.804659 kubelet[2514]: E0421 09:58:37.804638 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.804659 kubelet[2514]: W0421 09:58:37.804646 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.804659 kubelet[2514]: E0421 09:58:37.804656 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.804930 kubelet[2514]: E0421 09:58:37.804840 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.804930 kubelet[2514]: W0421 09:58:37.804849 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.804930 kubelet[2514]: E0421 09:58:37.804858 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.805194 kubelet[2514]: E0421 09:58:37.805131 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.805194 kubelet[2514]: W0421 09:58:37.805147 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.805194 kubelet[2514]: E0421 09:58:37.805157 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.806637 kubelet[2514]: E0421 09:58:37.805504 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.806637 kubelet[2514]: W0421 09:58:37.805512 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.806637 kubelet[2514]: E0421 09:58:37.805520 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.806637 kubelet[2514]: E0421 09:58:37.805744 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.806637 kubelet[2514]: W0421 09:58:37.805760 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.806637 kubelet[2514]: E0421 09:58:37.805774 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.807576 kubelet[2514]: E0421 09:58:37.807554 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.807653 kubelet[2514]: W0421 09:58:37.807630 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.808021 kubelet[2514]: E0421 09:58:37.808003 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.808843 kubelet[2514]: E0421 09:58:37.808563 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.808968 kubelet[2514]: W0421 09:58:37.808950 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.810354 kubelet[2514]: E0421 09:58:37.810329 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.812308 kubelet[2514]: E0421 09:58:37.812160 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.812308 kubelet[2514]: W0421 09:58:37.812183 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.812308 kubelet[2514]: E0421 09:58:37.812200 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.812693 kubelet[2514]: E0421 09:58:37.812458 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.812693 kubelet[2514]: W0421 09:58:37.812470 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.812693 kubelet[2514]: E0421 09:58:37.812480 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.812855 kubelet[2514]: E0421 09:58:37.812841 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.812960 kubelet[2514]: W0421 09:58:37.812905 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.813019 kubelet[2514]: E0421 09:58:37.813009 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.814198 kubelet[2514]: E0421 09:58:37.814181 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.814285 kubelet[2514]: W0421 09:58:37.814272 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.814849 kubelet[2514]: E0421 09:58:37.814351 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.816856 kubelet[2514]: E0421 09:58:37.816164 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.816856 kubelet[2514]: W0421 09:58:37.816179 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.816856 kubelet[2514]: E0421 09:58:37.816191 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.817666 kubelet[2514]: E0421 09:58:37.817506 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.817666 kubelet[2514]: W0421 09:58:37.817518 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.817666 kubelet[2514]: E0421 09:58:37.817529 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.820426 containerd[1482]: time="2026-04-21T09:58:37.819748865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bcfbf5855-rkjkv,Uid:42093e54-b253-442c-acca-b4a8b237c5ad,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:37.861901 containerd[1482]: time="2026-04-21T09:58:37.855683073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:37.861901 containerd[1482]: time="2026-04-21T09:58:37.855746972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:37.861901 containerd[1482]: time="2026-04-21T09:58:37.855757877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:37.861901 containerd[1482]: time="2026-04-21T09:58:37.855859560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:37.889866 kubelet[2514]: E0421 09:58:37.889814 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.890106 kubelet[2514]: W0421 09:58:37.890005 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.890106 kubelet[2514]: E0421 09:58:37.890030 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.890343 kubelet[2514]: E0421 09:58:37.890331 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.890520 kubelet[2514]: W0421 09:58:37.890455 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.890520 kubelet[2514]: E0421 09:58:37.890474 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.890813 kubelet[2514]: E0421 09:58:37.890791 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.891369 systemd[1]: Started cri-containerd-0066fcc98b569692bfe675929ec221e92b4e9b74ca08a8538f5f2a290073c931.scope - libcontainer container 0066fcc98b569692bfe675929ec221e92b4e9b74ca08a8538f5f2a290073c931. 
Apr 21 09:58:37.891677 kubelet[2514]: W0421 09:58:37.890809 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.891720 kubelet[2514]: E0421 09:58:37.891687 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.892608 kubelet[2514]: E0421 09:58:37.892583 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.892608 kubelet[2514]: W0421 09:58:37.892599 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.892608 kubelet[2514]: E0421 09:58:37.892610 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.893036 kubelet[2514]: E0421 09:58:37.893018 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.893036 kubelet[2514]: W0421 09:58:37.893034 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.893181 kubelet[2514]: E0421 09:58:37.893046 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.893451 kubelet[2514]: E0421 09:58:37.893426 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.893451 kubelet[2514]: W0421 09:58:37.893438 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.893627 kubelet[2514]: E0421 09:58:37.893542 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.893956 kubelet[2514]: E0421 09:58:37.893888 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.893956 kubelet[2514]: W0421 09:58:37.893922 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.893956 kubelet[2514]: E0421 09:58:37.893934 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.894811 kubelet[2514]: E0421 09:58:37.894789 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.894933 kubelet[2514]: W0421 09:58:37.894861 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.894933 kubelet[2514]: E0421 09:58:37.894880 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.895160 kubelet[2514]: E0421 09:58:37.895147 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.895160 kubelet[2514]: W0421 09:58:37.895159 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.895214 kubelet[2514]: E0421 09:58:37.895171 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.895386 kubelet[2514]: E0421 09:58:37.895370 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.895386 kubelet[2514]: W0421 09:58:37.895382 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.895581 kubelet[2514]: E0421 09:58:37.895391 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.895931 kubelet[2514]: E0421 09:58:37.895911 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.895931 kubelet[2514]: W0421 09:58:37.895929 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.896177 kubelet[2514]: E0421 09:58:37.895941 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.896628 kubelet[2514]: E0421 09:58:37.896250 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.896628 kubelet[2514]: W0421 09:58:37.896265 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.896628 kubelet[2514]: E0421 09:58:37.896276 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.896628 kubelet[2514]: E0421 09:58:37.896534 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.896628 kubelet[2514]: W0421 09:58:37.896547 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.896628 kubelet[2514]: E0421 09:58:37.896559 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.897044 kubelet[2514]: E0421 09:58:37.897025 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.897044 kubelet[2514]: W0421 09:58:37.897051 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.897044 kubelet[2514]: E0421 09:58:37.897064 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.897483 kubelet[2514]: E0421 09:58:37.897249 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.897483 kubelet[2514]: W0421 09:58:37.897257 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.897483 kubelet[2514]: E0421 09:58:37.897267 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.897944 kubelet[2514]: E0421 09:58:37.897921 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.897944 kubelet[2514]: W0421 09:58:37.897937 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.897963 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898185 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.898777 kubelet[2514]: W0421 09:58:37.898194 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898203 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898364 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.898777 kubelet[2514]: W0421 09:58:37.898372 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898392 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898714 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.898777 kubelet[2514]: W0421 09:58:37.898724 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.898777 kubelet[2514]: E0421 09:58:37.898736 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.899944 kubelet[2514]: E0421 09:58:37.899462 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.899944 kubelet[2514]: W0421 09:58:37.899479 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.899944 kubelet[2514]: E0421 09:58:37.899493 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.900415 kubelet[2514]: E0421 09:58:37.900373 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.900415 kubelet[2514]: W0421 09:58:37.900388 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.900415 kubelet[2514]: E0421 09:58:37.900400 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.902247 kubelet[2514]: E0421 09:58:37.902108 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.902247 kubelet[2514]: W0421 09:58:37.902126 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.902247 kubelet[2514]: E0421 09:58:37.902139 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.902563 kubelet[2514]: E0421 09:58:37.902488 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.902563 kubelet[2514]: W0421 09:58:37.902503 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.902563 kubelet[2514]: E0421 09:58:37.902513 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.903219 kubelet[2514]: E0421 09:58:37.902972 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.903219 kubelet[2514]: W0421 09:58:37.902986 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.903219 kubelet[2514]: E0421 09:58:37.902998 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.903848 kubelet[2514]: E0421 09:58:37.903769 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.903848 kubelet[2514]: W0421 09:58:37.903783 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.903848 kubelet[2514]: E0421 09:58:37.903796 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.913736 kubelet[2514]: E0421 09:58:37.913659 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:37.913736 kubelet[2514]: W0421 09:58:37.913683 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:37.913736 kubelet[2514]: E0421 09:58:37.913701 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:37.913937 containerd[1482]: time="2026-04-21T09:58:37.913804687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgdvb,Uid:bdbc3232-e15c-4d76-a4a3-84271f991f3c,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:37.940962 containerd[1482]: time="2026-04-21T09:58:37.940532430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bcfbf5855-rkjkv,Uid:42093e54-b253-442c-acca-b4a8b237c5ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"0066fcc98b569692bfe675929ec221e92b4e9b74ca08a8538f5f2a290073c931\"" Apr 21 09:58:37.942895 containerd[1482]: time="2026-04-21T09:58:37.942589677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 09:58:37.949244 containerd[1482]: time="2026-04-21T09:58:37.948806523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:37.949653 containerd[1482]: time="2026-04-21T09:58:37.949305323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:37.949653 containerd[1482]: time="2026-04-21T09:58:37.949354333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:37.949653 containerd[1482]: time="2026-04-21T09:58:37.949461530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:37.968707 systemd[1]: Started cri-containerd-fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94.scope - libcontainer container fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94. Apr 21 09:58:37.997851 containerd[1482]: time="2026-04-21T09:58:37.997680709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgdvb,Uid:bdbc3232-e15c-4d76-a4a3-84271f991f3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\"" Apr 21 09:58:39.232752 kubelet[2514]: E0421 09:58:39.232413 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:39.563366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583101071.mount: Deactivated successfully. 
Apr 21 09:58:39.964907 containerd[1482]: time="2026-04-21T09:58:39.964392795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:39.966075 containerd[1482]: time="2026-04-21T09:58:39.966016863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 21 09:58:39.967937 containerd[1482]: time="2026-04-21T09:58:39.966577774Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:39.969556 containerd[1482]: time="2026-04-21T09:58:39.969491911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:39.970287 containerd[1482]: time="2026-04-21T09:58:39.970140662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.027294357s" Apr 21 09:58:39.970287 containerd[1482]: time="2026-04-21T09:58:39.970172647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 21 09:58:39.972100 containerd[1482]: time="2026-04-21T09:58:39.971926023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 09:58:39.987614 containerd[1482]: time="2026-04-21T09:58:39.987574730Z" level=info msg="CreateContainer within sandbox \"0066fcc98b569692bfe675929ec221e92b4e9b74ca08a8538f5f2a290073c931\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 09:58:40.003513 containerd[1482]: time="2026-04-21T09:58:40.003421904Z" level=info msg="CreateContainer within sandbox \"0066fcc98b569692bfe675929ec221e92b4e9b74ca08a8538f5f2a290073c931\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee\"" Apr 21 09:58:40.005054 containerd[1482]: time="2026-04-21T09:58:40.004102334Z" level=info msg="StartContainer for \"3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee\"" Apr 21 09:58:40.043114 systemd[1]: Started cri-containerd-3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee.scope - libcontainer container 3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee. Apr 21 09:58:40.085163 containerd[1482]: time="2026-04-21T09:58:40.085029639Z" level=info msg="StartContainer for \"3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee\" returns successfully" Apr 21 09:58:40.366178 kubelet[2514]: I0421 09:58:40.366026 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bcfbf5855-rkjkv" podStartSLOduration=1.336455065 podStartE2EDuration="3.366010407s" podCreationTimestamp="2026-04-21 09:58:37 +0000 UTC" firstStartedPulling="2026-04-21 09:58:37.942215054 +0000 UTC m=+22.815964935" lastFinishedPulling="2026-04-21 09:58:39.971770395 +0000 UTC m=+24.845520277" observedRunningTime="2026-04-21 09:58:40.365769649 +0000 UTC m=+25.239519530" watchObservedRunningTime="2026-04-21 09:58:40.366010407 +0000 UTC m=+25.239760289" Apr 21 09:58:40.386149 kubelet[2514]: E0421 09:58:40.386118 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.386149 kubelet[2514]: W0421 09:58:40.386145 2514 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.386346 kubelet[2514]: E0421 09:58:40.386168 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.386383 kubelet[2514]: E0421 09:58:40.386372 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.386409 kubelet[2514]: W0421 09:58:40.386381 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.386433 kubelet[2514]: E0421 09:58:40.386413 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.386665 kubelet[2514]: E0421 09:58:40.386625 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.386665 kubelet[2514]: W0421 09:58:40.386649 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.386665 kubelet[2514]: E0421 09:58:40.386659 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.387022 kubelet[2514]: E0421 09:58:40.386989 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.387022 kubelet[2514]: W0421 09:58:40.387006 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.387022 kubelet[2514]: E0421 09:58:40.387015 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.387966 kubelet[2514]: E0421 09:58:40.387948 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.387966 kubelet[2514]: W0421 09:58:40.387991 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.387966 kubelet[2514]: E0421 09:58:40.388010 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.388463 kubelet[2514]: E0421 09:58:40.388393 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.388463 kubelet[2514]: W0421 09:58:40.388407 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.388463 kubelet[2514]: E0421 09:58:40.388418 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.388860 kubelet[2514]: E0421 09:58:40.388744 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.388860 kubelet[2514]: W0421 09:58:40.388756 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.388860 kubelet[2514]: E0421 09:58:40.388766 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.389094 kubelet[2514]: E0421 09:58:40.389045 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.389094 kubelet[2514]: W0421 09:58:40.389057 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.389094 kubelet[2514]: E0421 09:58:40.389067 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.389494 kubelet[2514]: E0421 09:58:40.389437 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.389494 kubelet[2514]: W0421 09:58:40.389449 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.389494 kubelet[2514]: E0421 09:58:40.389460 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.389874 kubelet[2514]: E0421 09:58:40.389788 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.389874 kubelet[2514]: W0421 09:58:40.389800 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.389874 kubelet[2514]: E0421 09:58:40.389812 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.390200 kubelet[2514]: E0421 09:58:40.390134 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.390200 kubelet[2514]: W0421 09:58:40.390145 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.390200 kubelet[2514]: E0421 09:58:40.390155 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.390704 kubelet[2514]: E0421 09:58:40.390635 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.390704 kubelet[2514]: W0421 09:58:40.390649 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.390704 kubelet[2514]: E0421 09:58:40.390661 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.391196 kubelet[2514]: E0421 09:58:40.391085 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.391196 kubelet[2514]: W0421 09:58:40.391098 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.391196 kubelet[2514]: E0421 09:58:40.391110 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.391528 kubelet[2514]: E0421 09:58:40.391420 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.391528 kubelet[2514]: W0421 09:58:40.391433 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.391528 kubelet[2514]: E0421 09:58:40.391484 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.392224 kubelet[2514]: E0421 09:58:40.392057 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.392224 kubelet[2514]: W0421 09:58:40.392083 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.392224 kubelet[2514]: E0421 09:58:40.392118 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.413211 kubelet[2514]: E0421 09:58:40.412971 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.413211 kubelet[2514]: W0421 09:58:40.413090 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.413211 kubelet[2514]: E0421 09:58:40.413124 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.413572 kubelet[2514]: E0421 09:58:40.413524 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.413572 kubelet[2514]: W0421 09:58:40.413550 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.413572 kubelet[2514]: E0421 09:58:40.413570 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.414038 kubelet[2514]: E0421 09:58:40.414015 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.414038 kubelet[2514]: W0421 09:58:40.414037 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.414038 kubelet[2514]: E0421 09:58:40.414056 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.414546 kubelet[2514]: E0421 09:58:40.414518 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.414546 kubelet[2514]: W0421 09:58:40.414545 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.414669 kubelet[2514]: E0421 09:58:40.414566 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.414975 kubelet[2514]: E0421 09:58:40.414956 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.414975 kubelet[2514]: W0421 09:58:40.414969 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.414975 kubelet[2514]: E0421 09:58:40.414980 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.415256 kubelet[2514]: E0421 09:58:40.415239 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.415256 kubelet[2514]: W0421 09:58:40.415253 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.415374 kubelet[2514]: E0421 09:58:40.415263 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.415516 kubelet[2514]: E0421 09:58:40.415494 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.415516 kubelet[2514]: W0421 09:58:40.415510 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.415675 kubelet[2514]: E0421 09:58:40.415533 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.415792 kubelet[2514]: E0421 09:58:40.415761 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.415792 kubelet[2514]: W0421 09:58:40.415776 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.415792 kubelet[2514]: E0421 09:58:40.415786 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.416119 kubelet[2514]: E0421 09:58:40.416103 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.416180 kubelet[2514]: W0421 09:58:40.416118 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.416217 kubelet[2514]: E0421 09:58:40.416186 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.416539 kubelet[2514]: E0421 09:58:40.416447 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.416539 kubelet[2514]: W0421 09:58:40.416463 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.416539 kubelet[2514]: E0421 09:58:40.416476 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.416793 kubelet[2514]: E0421 09:58:40.416748 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.417007 kubelet[2514]: W0421 09:58:40.416847 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.417007 kubelet[2514]: E0421 09:58:40.416862 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.417133 kubelet[2514]: E0421 09:58:40.417121 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.417185 kubelet[2514]: W0421 09:58:40.417175 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.417242 kubelet[2514]: E0421 09:58:40.417233 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.417630 kubelet[2514]: E0421 09:58:40.417613 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.417630 kubelet[2514]: W0421 09:58:40.417628 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.417720 kubelet[2514]: E0421 09:58:40.417640 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.417905 kubelet[2514]: E0421 09:58:40.417892 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.417950 kubelet[2514]: W0421 09:58:40.417906 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.417950 kubelet[2514]: E0421 09:58:40.417916 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.418100 kubelet[2514]: E0421 09:58:40.418089 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.418132 kubelet[2514]: W0421 09:58:40.418100 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.418132 kubelet[2514]: E0421 09:58:40.418109 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.418321 kubelet[2514]: E0421 09:58:40.418310 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.418321 kubelet[2514]: W0421 09:58:40.418321 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.418382 kubelet[2514]: E0421 09:58:40.418330 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:40.418628 kubelet[2514]: E0421 09:58:40.418613 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.418628 kubelet[2514]: W0421 09:58:40.418627 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.418691 kubelet[2514]: E0421 09:58:40.418636 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.418853 kubelet[2514]: E0421 09:58:40.418841 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:40.418853 kubelet[2514]: W0421 09:58:40.418852 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:40.418920 kubelet[2514]: E0421 09:58:40.418861 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:40.980406 systemd[1]: run-containerd-runc-k8s.io-3145a30b6e721bd7cd2348b79ced87541e927a5f77cc45f8ab3062637d4d3cee-runc.WosFyy.mount: Deactivated successfully. 
Apr 21 09:58:41.233376 kubelet[2514]: E0421 09:58:41.232358 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:41.349780 kubelet[2514]: I0421 09:58:41.349751 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:58:41.399912 kubelet[2514]: E0421 09:58:41.399780 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.399912 kubelet[2514]: W0421 09:58:41.399804 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.399952 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400232 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401061 kubelet[2514]: W0421 09:58:41.400243 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400254 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400531 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401061 kubelet[2514]: W0421 09:58:41.400544 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400555 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400726 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401061 kubelet[2514]: W0421 09:58:41.400733 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401061 kubelet[2514]: E0421 09:58:41.400742 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.400900 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401594 kubelet[2514]: W0421 09:58:41.400908 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.400917 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.401061 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401594 kubelet[2514]: W0421 09:58:41.401068 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.401076 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.401272 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401594 kubelet[2514]: W0421 09:58:41.401280 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.401288 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.401594 kubelet[2514]: E0421 09:58:41.401439 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.401796 kubelet[2514]: W0421 09:58:41.401446 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.401796 kubelet[2514]: E0421 09:58:41.401453 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.402309 kubelet[2514]: E0421 09:58:41.402287 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.402309 kubelet[2514]: W0421 09:58:41.402305 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.402497 kubelet[2514]: E0421 09:58:41.402319 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.402890 kubelet[2514]: E0421 09:58:41.402647 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.402890 kubelet[2514]: W0421 09:58:41.402660 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.402890 kubelet[2514]: E0421 09:58:41.402671 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.403096 kubelet[2514]: E0421 09:58:41.403058 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.403096 kubelet[2514]: W0421 09:58:41.403069 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.403096 kubelet[2514]: E0421 09:58:41.403080 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.403616 kubelet[2514]: E0421 09:58:41.403295 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.403616 kubelet[2514]: W0421 09:58:41.403305 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.403616 kubelet[2514]: E0421 09:58:41.403315 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.403616 kubelet[2514]: E0421 09:58:41.403517 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.403616 kubelet[2514]: W0421 09:58:41.403526 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.403616 kubelet[2514]: E0421 09:58:41.403534 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.403993 kubelet[2514]: E0421 09:58:41.403918 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.403993 kubelet[2514]: W0421 09:58:41.403933 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.403993 kubelet[2514]: E0421 09:58:41.403943 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.404282 kubelet[2514]: E0421 09:58:41.404200 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.404282 kubelet[2514]: W0421 09:58:41.404211 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.404282 kubelet[2514]: E0421 09:58:41.404235 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.420898 kubelet[2514]: E0421 09:58:41.420795 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.420898 kubelet[2514]: W0421 09:58:41.420851 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.420898 kubelet[2514]: E0421 09:58:41.420876 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.421840 kubelet[2514]: E0421 09:58:41.421503 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.421840 kubelet[2514]: W0421 09:58:41.421518 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.421840 kubelet[2514]: E0421 09:58:41.421531 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.422195 kubelet[2514]: E0421 09:58:41.422013 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.422275 kubelet[2514]: W0421 09:58:41.422026 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.422398 kubelet[2514]: E0421 09:58:41.422327 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.422694 kubelet[2514]: E0421 09:58:41.422679 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.422694 kubelet[2514]: W0421 09:58:41.422695 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.422855 kubelet[2514]: E0421 09:58:41.422708 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.423225 kubelet[2514]: E0421 09:58:41.423201 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.423225 kubelet[2514]: W0421 09:58:41.423220 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.423335 kubelet[2514]: E0421 09:58:41.423235 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.423773 kubelet[2514]: E0421 09:58:41.423706 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.423773 kubelet[2514]: W0421 09:58:41.423724 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.423773 kubelet[2514]: E0421 09:58:41.423738 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.424049 kubelet[2514]: E0421 09:58:41.424030 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.424049 kubelet[2514]: W0421 09:58:41.424048 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.424279 kubelet[2514]: E0421 09:58:41.424060 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.424556 kubelet[2514]: E0421 09:58:41.424539 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.424695 kubelet[2514]: W0421 09:58:41.424640 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.424695 kubelet[2514]: E0421 09:58:41.424661 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.425131 kubelet[2514]: E0421 09:58:41.424897 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.425131 kubelet[2514]: W0421 09:58:41.424908 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.425131 kubelet[2514]: E0421 09:58:41.424921 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.425589 kubelet[2514]: E0421 09:58:41.425478 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.425589 kubelet[2514]: W0421 09:58:41.425493 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.425589 kubelet[2514]: E0421 09:58:41.425505 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.425961 kubelet[2514]: E0421 09:58:41.425852 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.425961 kubelet[2514]: W0421 09:58:41.425867 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.425961 kubelet[2514]: E0421 09:58:41.425879 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.427045 kubelet[2514]: E0421 09:58:41.426904 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.427045 kubelet[2514]: W0421 09:58:41.426920 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.427045 kubelet[2514]: E0421 09:58:41.426934 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.427624 kubelet[2514]: E0421 09:58:41.427413 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.427624 kubelet[2514]: W0421 09:58:41.427428 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.427624 kubelet[2514]: E0421 09:58:41.427442 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.427838 kubelet[2514]: E0421 09:58:41.427771 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.427838 kubelet[2514]: W0421 09:58:41.427785 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.427838 kubelet[2514]: E0421 09:58:41.427803 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.428072 kubelet[2514]: E0421 09:58:41.428059 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.428072 kubelet[2514]: W0421 09:58:41.428072 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.428130 kubelet[2514]: E0421 09:58:41.428082 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.428320 kubelet[2514]: E0421 09:58:41.428304 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.428320 kubelet[2514]: W0421 09:58:41.428317 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.428420 kubelet[2514]: E0421 09:58:41.428327 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.428522 kubelet[2514]: E0421 09:58:41.428511 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.428558 kubelet[2514]: W0421 09:58:41.428522 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.428558 kubelet[2514]: E0421 09:58:41.428532 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:41.429271 kubelet[2514]: E0421 09:58:41.429151 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:41.429271 kubelet[2514]: W0421 09:58:41.429165 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:41.429271 kubelet[2514]: E0421 09:58:41.429175 2514 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:41.433530 containerd[1482]: time="2026-04-21T09:58:41.433482628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:41.434765 containerd[1482]: time="2026-04-21T09:58:41.434587841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 21 09:58:41.435737 containerd[1482]: time="2026-04-21T09:58:41.435602579Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:41.438499 containerd[1482]: time="2026-04-21T09:58:41.438448662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:41.439372 containerd[1482]: time="2026-04-21T09:58:41.439219108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.467259864s" Apr 21 09:58:41.439372 containerd[1482]: time="2026-04-21T09:58:41.439284336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 21 09:58:41.443871 containerd[1482]: time="2026-04-21T09:58:41.443815253Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 09:58:41.458048 containerd[1482]: time="2026-04-21T09:58:41.457994289Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57\"" Apr 21 09:58:41.459530 containerd[1482]: time="2026-04-21T09:58:41.459494038Z" level=info msg="StartContainer for \"e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57\"" Apr 21 09:58:41.497131 systemd[1]: Started cri-containerd-e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57.scope - libcontainer container e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57. Apr 21 09:58:41.540129 containerd[1482]: time="2026-04-21T09:58:41.540031597Z" level=info msg="StartContainer for \"e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57\" returns successfully" Apr 21 09:58:41.555746 systemd[1]: cri-containerd-e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57.scope: Deactivated successfully. 
Apr 21 09:58:41.641553 containerd[1482]: time="2026-04-21T09:58:41.641491904Z" level=info msg="shim disconnected" id=e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57 namespace=k8s.io Apr 21 09:58:41.641811 containerd[1482]: time="2026-04-21T09:58:41.641790023Z" level=warning msg="cleaning up after shim disconnected" id=e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57 namespace=k8s.io Apr 21 09:58:41.641931 containerd[1482]: time="2026-04-21T09:58:41.641911687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:41.980891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0b9c61e09c731178b5bdff2cdd646fd4760dfa45fd1ba8089d819951472fb57-rootfs.mount: Deactivated successfully. Apr 21 09:58:42.357342 containerd[1482]: time="2026-04-21T09:58:42.356699015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 09:58:43.240070 kubelet[2514]: E0421 09:58:43.239687 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:45.230057 kubelet[2514]: E0421 09:58:45.229499 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:47.231442 kubelet[2514]: E0421 09:58:47.230057 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" 
podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:47.459405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829662306.mount: Deactivated successfully. Apr 21 09:58:47.489655 containerd[1482]: time="2026-04-21T09:58:47.488689555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:47.490076 containerd[1482]: time="2026-04-21T09:58:47.490051410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 21 09:58:47.490920 containerd[1482]: time="2026-04-21T09:58:47.490891046Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:47.494874 containerd[1482]: time="2026-04-21T09:58:47.494847489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:47.495703 containerd[1482]: time="2026-04-21T09:58:47.495658681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 5.138914067s" Apr 21 09:58:47.495769 containerd[1482]: time="2026-04-21T09:58:47.495707237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 21 09:58:47.500935 containerd[1482]: time="2026-04-21T09:58:47.500898095Z" level=info msg="CreateContainer within sandbox 
\"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 09:58:47.520516 containerd[1482]: time="2026-04-21T09:58:47.520472703Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4\"" Apr 21 09:58:47.521271 containerd[1482]: time="2026-04-21T09:58:47.521112426Z" level=info msg="StartContainer for \"2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4\"" Apr 21 09:58:47.553027 systemd[1]: Started cri-containerd-2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4.scope - libcontainer container 2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4. Apr 21 09:58:47.587828 containerd[1482]: time="2026-04-21T09:58:47.587776820Z" level=info msg="StartContainer for \"2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4\" returns successfully" Apr 21 09:58:47.689411 systemd[1]: cri-containerd-2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4.scope: Deactivated successfully. 
Apr 21 09:58:47.804168 containerd[1482]: time="2026-04-21T09:58:47.803858906Z" level=info msg="shim disconnected" id=2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4 namespace=k8s.io Apr 21 09:58:47.804168 containerd[1482]: time="2026-04-21T09:58:47.803918599Z" level=warning msg="cleaning up after shim disconnected" id=2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4 namespace=k8s.io Apr 21 09:58:47.804168 containerd[1482]: time="2026-04-21T09:58:47.803929456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:47.817229 containerd[1482]: time="2026-04-21T09:58:47.816965814Z" level=warning msg="cleanup warnings time=\"2026-04-21T09:58:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 09:58:48.373378 containerd[1482]: time="2026-04-21T09:58:48.372837406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 09:58:48.463741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2969badf54826114d91518f1a358aa4e76eb593764d84abffce1b52a249dd8d4-rootfs.mount: Deactivated successfully. 
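[Editor's note] The kubelet lines above repeat the same "Error syncing pod, skipping … cni plugin not initialized" record for `csi-node-driver-lnmtt` every two seconds while the Calico node image is still being pulled. When triaging a capture like this, it helps to tally those records per pod to separate the one pod that is actually stuck from transient noise. A minimal sketch (assumptions: Python 3 stdlib only; the `pod="<namespace>/<name>"` field layout matches the kubelet lines above; `pod_sync_errors` is a name introduced here, not a real tool):

```python
import re
from collections import Counter

# Match kubelet "Error syncing pod, skipping" records and capture the
# pod="<namespace>/<name>" field. The literal `pod="` does not match the
# later `podUID="` field, so the capture lands on the namespaced pod name.
POD_ERR = re.compile(r'"Error syncing pod, skipping".*pod="([^"]+)"')

def pod_sync_errors(lines):
    """Count 'Error syncing pod' occurrences per pod across journal lines."""
    matches = (POD_ERR.search(line) for line in lines)
    return Counter(m.group(1) for m in matches if m)
```

Fed the journal above, the counter would show `calico-system/csi-node-driver-lnmtt` accumulating hits until the CNI plugin initializes, while pods that fail only once fall to the bottom of the tally.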
Apr 21 09:58:49.233799 kubelet[2514]: E0421 09:58:49.233735 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:50.854792 containerd[1482]: time="2026-04-21T09:58:50.854726876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:50.856477 containerd[1482]: time="2026-04-21T09:58:50.856433068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 21 09:58:50.857173 containerd[1482]: time="2026-04-21T09:58:50.857053761Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:50.864538 containerd[1482]: time="2026-04-21T09:58:50.864473798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:50.865767 containerd[1482]: time="2026-04-21T09:58:50.865446019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.492544808s" Apr 21 09:58:50.865767 containerd[1482]: time="2026-04-21T09:58:50.865490706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 21 09:58:50.871748 containerd[1482]: time="2026-04-21T09:58:50.871700352Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 09:58:50.890129 containerd[1482]: time="2026-04-21T09:58:50.890053998Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07\"" Apr 21 09:58:50.892060 containerd[1482]: time="2026-04-21T09:58:50.891017931Z" level=info msg="StartContainer for \"d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07\"" Apr 21 09:58:50.932708 systemd[1]: Started cri-containerd-d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07.scope - libcontainer container d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07. Apr 21 09:58:50.966217 containerd[1482]: time="2026-04-21T09:58:50.966136748Z" level=info msg="StartContainer for \"d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07\" returns successfully" Apr 21 09:58:51.230314 kubelet[2514]: E0421 09:58:51.229675 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnmtt" podUID="8e8c68fe-3eb8-447c-8335-6bfebeaa92f8" Apr 21 09:58:51.546321 systemd[1]: cri-containerd-d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07.scope: Deactivated successfully. 
Apr 21 09:58:51.568635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07-rootfs.mount: Deactivated successfully. Apr 21 09:58:51.593500 kubelet[2514]: I0421 09:58:51.593468 2514 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 09:58:51.625851 containerd[1482]: time="2026-04-21T09:58:51.625619232Z" level=info msg="shim disconnected" id=d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07 namespace=k8s.io Apr 21 09:58:51.625851 containerd[1482]: time="2026-04-21T09:58:51.625686374Z" level=warning msg="cleaning up after shim disconnected" id=d6774c271d378d96bdf4b165b12e76df6a1766124eac2316f0b2c8b95bde8e07 namespace=k8s.io Apr 21 09:58:51.625851 containerd[1482]: time="2026-04-21T09:58:51.625700747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:51.680196 systemd[1]: Created slice kubepods-burstable-podb02e60bd_7bf5_4d67_90f0_2644154eb8f8.slice - libcontainer container kubepods-burstable-podb02e60bd_7bf5_4d67_90f0_2644154eb8f8.slice. Apr 21 09:58:51.695337 systemd[1]: Created slice kubepods-burstable-pod7106ec2c_c759_48f7_8dd6_aae54768ee28.slice - libcontainer container kubepods-burstable-pod7106ec2c_c759_48f7_8dd6_aae54768ee28.slice. 
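[Editor's note] The `Created slice kubepods-burstable-pod…` entries above encode each pod's UID directly in the systemd unit name, with the UID's dashes escaped as underscores (e.g. `podb02e60bd_7bf5_…` for pod UID `b02e60bd-7bf5-…` of coredns-674b8bbfcf-nm2rg). Recovering the UID lets you join these systemd entries against the kubelet's `podUID="…"` fields. A small sketch of that decoding (assumptions: Python 3 stdlib only; cgroupfs-style `kubepods-<qos>-pod<uid>.slice` naming as seen in this journal; `slice_to_pod_uid` is a name introduced here):

```python
import re

# kubepods slice names as they appear in this journal:
#   kubepods-<qos>-pod<uid-with-underscores>.slice
SLICE = re.compile(r'kubepods-(burstable|besteffort|guaranteed)-pod([0-9a-f_]+)\.slice')

def slice_to_pod_uid(unit):
    """Return (qos_class, pod_uid) for a kubepods slice name, or None.

    systemd unit names cannot contain '-' inside the pod segment, so the
    pod UID's dashes are written as underscores; undo that here.
    """
    m = SLICE.search(unit)
    if not m:
        return None
    qos, escaped = m.groups()
    return qos, escaped.replace("_", "-")
```

With the UID recovered, `grep podUID=<uid>` over the same journal links a slice's lifecycle events to the kubelet's per-pod error records.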
Apr 21 09:58:51.698926 kubelet[2514]: I0421 09:58:51.698894 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczq4\" (UniqueName: \"kubernetes.io/projected/43fc62f9-80c2-420c-b7a3-69da00af4f8f-kube-api-access-fczq4\") pod \"calico-apiserver-b79cb9f78-rwr6l\" (UID: \"43fc62f9-80c2-420c-b7a3-69da00af4f8f\") " pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" Apr 21 09:58:51.699138 kubelet[2514]: I0421 09:58:51.699123 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877bl\" (UniqueName: \"kubernetes.io/projected/b02e60bd-7bf5-4d67-90f0-2644154eb8f8-kube-api-access-877bl\") pod \"coredns-674b8bbfcf-nm2rg\" (UID: \"b02e60bd-7bf5-4d67-90f0-2644154eb8f8\") " pod="kube-system/coredns-674b8bbfcf-nm2rg" Apr 21 09:58:51.699229 kubelet[2514]: I0421 09:58:51.699215 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7106ec2c-c759-48f7-8dd6-aae54768ee28-config-volume\") pod \"coredns-674b8bbfcf-598qp\" (UID: \"7106ec2c-c759-48f7-8dd6-aae54768ee28\") " pod="kube-system/coredns-674b8bbfcf-598qp" Apr 21 09:58:51.699314 kubelet[2514]: I0421 09:58:51.699301 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02e60bd-7bf5-4d67-90f0-2644154eb8f8-config-volume\") pod \"coredns-674b8bbfcf-nm2rg\" (UID: \"b02e60bd-7bf5-4d67-90f0-2644154eb8f8\") " pod="kube-system/coredns-674b8bbfcf-nm2rg" Apr 21 09:58:51.699381 kubelet[2514]: I0421 09:58:51.699370 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzs65\" (UniqueName: \"kubernetes.io/projected/7106ec2c-c759-48f7-8dd6-aae54768ee28-kube-api-access-hzs65\") pod \"coredns-674b8bbfcf-598qp\" (UID: 
\"7106ec2c-c759-48f7-8dd6-aae54768ee28\") " pod="kube-system/coredns-674b8bbfcf-598qp" Apr 21 09:58:51.699975 kubelet[2514]: I0421 09:58:51.699439 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43fc62f9-80c2-420c-b7a3-69da00af4f8f-calico-apiserver-certs\") pod \"calico-apiserver-b79cb9f78-rwr6l\" (UID: \"43fc62f9-80c2-420c-b7a3-69da00af4f8f\") " pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" Apr 21 09:58:51.702683 systemd[1]: Created slice kubepods-besteffort-pod076dd66b_5522_4ecd_a955_84e896c699d3.slice - libcontainer container kubepods-besteffort-pod076dd66b_5522_4ecd_a955_84e896c699d3.slice. Apr 21 09:58:51.713104 systemd[1]: Created slice kubepods-besteffort-pod6c6d8cb6_7cee_40f4_a210_b9a1efc80ea9.slice - libcontainer container kubepods-besteffort-pod6c6d8cb6_7cee_40f4_a210_b9a1efc80ea9.slice. Apr 21 09:58:51.722485 systemd[1]: Created slice kubepods-besteffort-pod43fc62f9_80c2_420c_b7a3_69da00af4f8f.slice - libcontainer container kubepods-besteffort-pod43fc62f9_80c2_420c_b7a3_69da00af4f8f.slice. Apr 21 09:58:51.729799 systemd[1]: Created slice kubepods-besteffort-pod3ec3bce5_d5f3_44fd_a00a_602613380bcc.slice - libcontainer container kubepods-besteffort-pod3ec3bce5_d5f3_44fd_a00a_602613380bcc.slice. Apr 21 09:58:51.735863 systemd[1]: Created slice kubepods-besteffort-podcb94aaf5_d4ea_43a3_a742_62679734ea77.slice - libcontainer container kubepods-besteffort-podcb94aaf5_d4ea_43a3_a742_62679734ea77.slice. 
Apr 21 09:58:51.800947 kubelet[2514]: I0421 09:58:51.800001 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9-calico-apiserver-certs\") pod \"calico-apiserver-b79cb9f78-7tssf\" (UID: \"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9\") " pod="calico-system/calico-apiserver-b79cb9f78-7tssf" Apr 21 09:58:51.800947 kubelet[2514]: I0421 09:58:51.800054 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnz6\" (UniqueName: \"kubernetes.io/projected/076dd66b-5522-4ecd-a955-84e896c699d3-kube-api-access-kvnz6\") pod \"calico-kube-controllers-cfd49cd7d-twdnt\" (UID: \"076dd66b-5522-4ecd-a955-84e896c699d3\") " pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" Apr 21 09:58:51.800947 kubelet[2514]: I0421 09:58:51.800127 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2m9\" (UniqueName: \"kubernetes.io/projected/6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9-kube-api-access-jd2m9\") pod \"calico-apiserver-b79cb9f78-7tssf\" (UID: \"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9\") " pod="calico-system/calico-apiserver-b79cb9f78-7tssf" Apr 21 09:58:51.800947 kubelet[2514]: I0421 09:58:51.800151 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb94aaf5-d4ea-43a3-a742-62679734ea77-config\") pod \"goldmane-5b85766d88-hfxp8\" (UID: \"cb94aaf5-d4ea-43a3-a742-62679734ea77\") " pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:51.800947 kubelet[2514]: I0421 09:58:51.800187 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7q29\" (UniqueName: \"kubernetes.io/projected/cb94aaf5-d4ea-43a3-a742-62679734ea77-kube-api-access-l7q29\") pod 
\"goldmane-5b85766d88-hfxp8\" (UID: \"cb94aaf5-d4ea-43a3-a742-62679734ea77\") " pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:51.801287 kubelet[2514]: I0421 09:58:51.800211 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb94aaf5-d4ea-43a3-a742-62679734ea77-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-hfxp8\" (UID: \"cb94aaf5-d4ea-43a3-a742-62679734ea77\") " pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:51.801287 kubelet[2514]: I0421 09:58:51.800250 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/076dd66b-5522-4ecd-a955-84e896c699d3-tigera-ca-bundle\") pod \"calico-kube-controllers-cfd49cd7d-twdnt\" (UID: \"076dd66b-5522-4ecd-a955-84e896c699d3\") " pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" Apr 21 09:58:51.801287 kubelet[2514]: I0421 09:58:51.800269 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cb94aaf5-d4ea-43a3-a742-62679734ea77-goldmane-key-pair\") pod \"goldmane-5b85766d88-hfxp8\" (UID: \"cb94aaf5-d4ea-43a3-a742-62679734ea77\") " pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:51.801287 kubelet[2514]: I0421 09:58:51.800320 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-nginx-config\") pod \"whisker-98d5c77f5-775kb\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:51.801287 kubelet[2514]: I0421 09:58:51.800340 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-backend-key-pair\") pod \"whisker-98d5c77f5-775kb\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:51.801472 kubelet[2514]: I0421 09:58:51.800363 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-ca-bundle\") pod \"whisker-98d5c77f5-775kb\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:51.801472 kubelet[2514]: I0421 09:58:51.800386 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfjj\" (UniqueName: \"kubernetes.io/projected/3ec3bce5-d5f3-44fd-a00a-602613380bcc-kube-api-access-vpfjj\") pod \"whisker-98d5c77f5-775kb\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:51.992421 containerd[1482]: time="2026-04-21T09:58:51.992375218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm2rg,Uid:b02e60bd-7bf5-4d67-90f0-2644154eb8f8,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:52.001594 containerd[1482]: time="2026-04-21T09:58:52.001355796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-598qp,Uid:7106ec2c-c759-48f7-8dd6-aae54768ee28,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:52.016618 containerd[1482]: time="2026-04-21T09:58:52.016313166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfd49cd7d-twdnt,Uid:076dd66b-5522-4ecd-a955-84e896c699d3,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:52.023842 containerd[1482]: time="2026-04-21T09:58:52.023758798Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-7tssf,Uid:6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:52.029513 containerd[1482]: time="2026-04-21T09:58:52.029427480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-rwr6l,Uid:43fc62f9-80c2-420c-b7a3-69da00af4f8f,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:52.034561 containerd[1482]: time="2026-04-21T09:58:52.034501843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98d5c77f5-775kb,Uid:3ec3bce5-d5f3-44fd-a00a-602613380bcc,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:52.044691 containerd[1482]: time="2026-04-21T09:58:52.043849325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hfxp8,Uid:cb94aaf5-d4ea-43a3-a742-62679734ea77,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:52.182867 kubelet[2514]: I0421 09:58:52.181061 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:58:52.284486 containerd[1482]: time="2026-04-21T09:58:52.284441898Z" level=error msg="Failed to destroy network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.285020 containerd[1482]: time="2026-04-21T09:58:52.284991421Z" level=error msg="encountered an error cleaning up failed sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.285167 containerd[1482]: time="2026-04-21T09:58:52.285142742Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-cfd49cd7d-twdnt,Uid:076dd66b-5522-4ecd-a955-84e896c699d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.285671 kubelet[2514]: E0421 09:58:52.285613 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.285995 kubelet[2514]: E0421 09:58:52.285703 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" Apr 21 09:58:52.285995 kubelet[2514]: E0421 09:58:52.285724 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" Apr 21 09:58:52.285995 kubelet[2514]: E0421 09:58:52.285940 2514 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cfd49cd7d-twdnt_calico-system(076dd66b-5522-4ecd-a955-84e896c699d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cfd49cd7d-twdnt_calico-system(076dd66b-5522-4ecd-a955-84e896c699d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" podUID="076dd66b-5522-4ecd-a955-84e896c699d3" Apr 21 09:58:52.288353 containerd[1482]: time="2026-04-21T09:58:52.286322772Z" level=error msg="Failed to destroy network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.288353 containerd[1482]: time="2026-04-21T09:58:52.286598994Z" level=error msg="Failed to destroy network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.288353 containerd[1482]: time="2026-04-21T09:58:52.287769136Z" level=error msg="encountered an error cleaning up failed sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 
09:58:52.289747 containerd[1482]: time="2026-04-21T09:58:52.289709617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-598qp,Uid:7106ec2c-c759-48f7-8dd6-aae54768ee28,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.290187 kubelet[2514]: E0421 09:58:52.290099 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.290187 kubelet[2514]: E0421 09:58:52.290160 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-598qp" Apr 21 09:58:52.290485 kubelet[2514]: E0421 09:58:52.290455 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-598qp" Apr 21 09:58:52.290585 kubelet[2514]: 
E0421 09:58:52.290528 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-598qp_kube-system(7106ec2c-c759-48f7-8dd6-aae54768ee28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-598qp_kube-system(7106ec2c-c759-48f7-8dd6-aae54768ee28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-598qp" podUID="7106ec2c-c759-48f7-8dd6-aae54768ee28" Apr 21 09:58:52.291103 containerd[1482]: time="2026-04-21T09:58:52.290977037Z" level=error msg="encountered an error cleaning up failed sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.291103 containerd[1482]: time="2026-04-21T09:58:52.291025436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-rwr6l,Uid:43fc62f9-80c2-420c-b7a3-69da00af4f8f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.291250 kubelet[2514]: E0421 09:58:52.291163 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.291250 kubelet[2514]: E0421 09:58:52.291197 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" Apr 21 09:58:52.291250 kubelet[2514]: E0421 09:58:52.291213 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" Apr 21 09:58:52.291362 kubelet[2514]: E0421 09:58:52.291244 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b79cb9f78-rwr6l_calico-system(43fc62f9-80c2-420c-b7a3-69da00af4f8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b79cb9f78-rwr6l_calico-system(43fc62f9-80c2-420c-b7a3-69da00af4f8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" podUID="43fc62f9-80c2-420c-b7a3-69da00af4f8f" Apr 21 09:58:52.303473 containerd[1482]: time="2026-04-21T09:58:52.303430459Z" level=error msg="Failed to destroy network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.304463 containerd[1482]: time="2026-04-21T09:58:52.304255523Z" level=error msg="encountered an error cleaning up failed sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.306123 containerd[1482]: time="2026-04-21T09:58:52.306089359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm2rg,Uid:b02e60bd-7bf5-4d67-90f0-2644154eb8f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.307769 kubelet[2514]: E0421 09:58:52.307274 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.307769 kubelet[2514]: E0421 09:58:52.307364 2514 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nm2rg" Apr 21 09:58:52.307769 kubelet[2514]: E0421 09:58:52.307387 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nm2rg" Apr 21 09:58:52.307924 kubelet[2514]: E0421 09:58:52.307440 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nm2rg_kube-system(b02e60bd-7bf5-4d67-90f0-2644154eb8f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nm2rg_kube-system(b02e60bd-7bf5-4d67-90f0-2644154eb8f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nm2rg" podUID="b02e60bd-7bf5-4d67-90f0-2644154eb8f8" Apr 21 09:58:52.317733 containerd[1482]: time="2026-04-21T09:58:52.317682849Z" level=error msg="Failed to destroy network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.318250 containerd[1482]: time="2026-04-21T09:58:52.318215917Z" level=error msg="encountered an error cleaning up failed sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.318452 containerd[1482]: time="2026-04-21T09:58:52.318424365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-7tssf,Uid:6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.318904 kubelet[2514]: E0421 09:58:52.318744 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.318986 kubelet[2514]: E0421 09:58:52.318811 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-b79cb9f78-7tssf" Apr 21 09:58:52.318986 kubelet[2514]: E0421 09:58:52.318944 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-b79cb9f78-7tssf" Apr 21 09:58:52.319280 kubelet[2514]: E0421 09:58:52.319044 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b79cb9f78-7tssf_calico-system(6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b79cb9f78-7tssf_calico-system(6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-b79cb9f78-7tssf" podUID="6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9" Apr 21 09:58:52.335185 containerd[1482]: time="2026-04-21T09:58:52.335135894Z" level=error msg="Failed to destroy network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.336214 containerd[1482]: time="2026-04-21T09:58:52.336170286Z" level=error msg="encountered an error cleaning up failed sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.336303 containerd[1482]: time="2026-04-21T09:58:52.336250631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98d5c77f5-775kb,Uid:3ec3bce5-d5f3-44fd-a00a-602613380bcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.337035 kubelet[2514]: E0421 09:58:52.336510 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.337035 kubelet[2514]: E0421 09:58:52.336572 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:52.337035 kubelet[2514]: E0421 09:58:52.336592 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98d5c77f5-775kb" Apr 21 09:58:52.337205 kubelet[2514]: E0421 09:58:52.336638 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-98d5c77f5-775kb_calico-system(3ec3bce5-d5f3-44fd-a00a-602613380bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-98d5c77f5-775kb_calico-system(3ec3bce5-d5f3-44fd-a00a-602613380bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-98d5c77f5-775kb" podUID="3ec3bce5-d5f3-44fd-a00a-602613380bcc" Apr 21 09:58:52.344779 containerd[1482]: time="2026-04-21T09:58:52.344696587Z" level=error msg="Failed to destroy network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.345182 containerd[1482]: time="2026-04-21T09:58:52.345127134Z" level=error msg="encountered an error cleaning up failed sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.345237 containerd[1482]: time="2026-04-21T09:58:52.345200273Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-hfxp8,Uid:cb94aaf5-d4ea-43a3-a742-62679734ea77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.345557 kubelet[2514]: E0421 09:58:52.345499 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.345622 kubelet[2514]: E0421 09:58:52.345567 2514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:52.345622 kubelet[2514]: E0421 09:58:52.345589 2514 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hfxp8" Apr 21 09:58:52.345702 kubelet[2514]: E0421 09:58:52.345640 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-hfxp8_calico-system(cb94aaf5-d4ea-43a3-a742-62679734ea77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-hfxp8_calico-system(cb94aaf5-d4ea-43a3-a742-62679734ea77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hfxp8" podUID="cb94aaf5-d4ea-43a3-a742-62679734ea77" Apr 21 09:58:52.388603 kubelet[2514]: I0421 09:58:52.387692 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:58:52.389761 containerd[1482]: time="2026-04-21T09:58:52.389719299Z" level=info msg="StopPodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" Apr 21 09:58:52.390272 kubelet[2514]: I0421 09:58:52.390250 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:58:52.391061 containerd[1482]: time="2026-04-21T09:58:52.391026391Z" level=info msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" Apr 21 09:58:52.391246 containerd[1482]: time="2026-04-21T09:58:52.391217664Z" level=info msg="Ensure that sandbox b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678 in task-service has been cleanup successfully" Apr 21 09:58:52.392221 containerd[1482]: time="2026-04-21T09:58:52.391837083Z" level=info msg="Ensure that sandbox 0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da in task-service has been cleanup successfully" Apr 21 09:58:52.396313 kubelet[2514]: I0421 09:58:52.396267 2514 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:58:52.397563 containerd[1482]: time="2026-04-21T09:58:52.397153882Z" level=info msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" Apr 21 09:58:52.397563 containerd[1482]: time="2026-04-21T09:58:52.397351521Z" level=info msg="Ensure that sandbox d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183 in task-service has been cleanup successfully" Apr 21 09:58:52.414716 kubelet[2514]: I0421 09:58:52.414670 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:58:52.415677 containerd[1482]: time="2026-04-21T09:58:52.415631471Z" level=info msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" Apr 21 09:58:52.415892 containerd[1482]: time="2026-04-21T09:58:52.415865259Z" level=info msg="Ensure that sandbox b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35 in task-service has been cleanup successfully" Apr 21 09:58:52.428990 kubelet[2514]: I0421 09:58:52.428171 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:52.429437 containerd[1482]: time="2026-04-21T09:58:52.429360840Z" level=info msg="CreateContainer within sandbox \"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 09:58:52.429722 containerd[1482]: time="2026-04-21T09:58:52.429702795Z" level=info msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" Apr 21 09:58:52.430089 containerd[1482]: time="2026-04-21T09:58:52.430067088Z" level=info msg="Ensure that sandbox 
f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5 in task-service has been cleanup successfully" Apr 21 09:58:52.436163 kubelet[2514]: I0421 09:58:52.436076 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:58:52.443346 containerd[1482]: time="2026-04-21T09:58:52.442673273Z" level=info msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" Apr 21 09:58:52.443346 containerd[1482]: time="2026-04-21T09:58:52.442866228Z" level=info msg="Ensure that sandbox 639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f in task-service has been cleanup successfully" Apr 21 09:58:52.454635 kubelet[2514]: I0421 09:58:52.453954 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:58:52.460061 containerd[1482]: time="2026-04-21T09:58:52.460026397Z" level=info msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" Apr 21 09:58:52.460369 containerd[1482]: time="2026-04-21T09:58:52.460348456Z" level=info msg="Ensure that sandbox 884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8 in task-service has been cleanup successfully" Apr 21 09:58:52.483954 containerd[1482]: time="2026-04-21T09:58:52.483893284Z" level=error msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" failed" error="failed to destroy network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.484296 kubelet[2514]: E0421 09:58:52.484257 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:58:52.484467 kubelet[2514]: E0421 09:58:52.484401 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678"} Apr 21 09:58:52.484559 kubelet[2514]: E0421 09:58:52.484547 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.484762 kubelet[2514]: E0421 09:58:52.484728 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-b79cb9f78-7tssf" podUID="6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9" Apr 21 09:58:52.488103 containerd[1482]: time="2026-04-21T09:58:52.488063199Z" level=info msg="CreateContainer within sandbox 
\"fb1d76d34a95828f68da2fd659beb366e9567f3bd500fdae303a570d0185df94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75\"" Apr 21 09:58:52.489631 containerd[1482]: time="2026-04-21T09:58:52.489603959Z" level=info msg="StartContainer for \"7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75\"" Apr 21 09:58:52.528487 containerd[1482]: time="2026-04-21T09:58:52.528425921Z" level=error msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" failed" error="failed to destroy network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.532322 kubelet[2514]: E0421 09:58:52.532259 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:58:52.532443 kubelet[2514]: E0421 09:58:52.532332 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8"} Apr 21 09:58:52.532443 kubelet[2514]: E0421 09:58:52.532375 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b02e60bd-7bf5-4d67-90f0-2644154eb8f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.532443 kubelet[2514]: E0421 09:58:52.532407 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b02e60bd-7bf5-4d67-90f0-2644154eb8f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nm2rg" podUID="b02e60bd-7bf5-4d67-90f0-2644154eb8f8" Apr 21 09:58:52.558295 containerd[1482]: time="2026-04-21T09:58:52.558233308Z" level=error msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" failed" error="failed to destroy network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.558833 kubelet[2514]: E0421 09:58:52.558486 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:58:52.558833 kubelet[2514]: E0421 09:58:52.558557 2514 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35"} Apr 21 09:58:52.558833 kubelet[2514]: E0421 09:58:52.558589 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb94aaf5-d4ea-43a3-a742-62679734ea77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.558833 kubelet[2514]: E0421 09:58:52.558610 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb94aaf5-d4ea-43a3-a742-62679734ea77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hfxp8" podUID="cb94aaf5-d4ea-43a3-a742-62679734ea77" Apr 21 09:58:52.565307 containerd[1482]: time="2026-04-21T09:58:52.565197632Z" level=error msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" failed" error="failed to destroy network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.565543 kubelet[2514]: E0421 09:58:52.565479 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to destroy network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:58:52.565635 kubelet[2514]: E0421 09:58:52.565547 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183"} Apr 21 09:58:52.565635 kubelet[2514]: E0421 09:58:52.565577 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"076dd66b-5522-4ecd-a955-84e896c699d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.565635 kubelet[2514]: E0421 09:58:52.565601 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"076dd66b-5522-4ecd-a955-84e896c699d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" podUID="076dd66b-5522-4ecd-a955-84e896c699d3" Apr 21 09:58:52.569332 containerd[1482]: time="2026-04-21T09:58:52.569134840Z" level=error msg="StopPodSandbox for 
\"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" failed" error="failed to destroy network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.569632 kubelet[2514]: E0421 09:58:52.569558 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:58:52.570460 kubelet[2514]: E0421 09:58:52.569796 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da"} Apr 21 09:58:52.570460 kubelet[2514]: E0421 09:58:52.569849 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43fc62f9-80c2-420c-b7a3-69da00af4f8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.570460 kubelet[2514]: E0421 09:58:52.569872 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43fc62f9-80c2-420c-b7a3-69da00af4f8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" podUID="43fc62f9-80c2-420c-b7a3-69da00af4f8f" Apr 21 09:58:52.576394 containerd[1482]: time="2026-04-21T09:58:52.576342481Z" level=error msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" failed" error="failed to destroy network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.577235 kubelet[2514]: E0421 09:58:52.577105 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:58:52.577235 kubelet[2514]: E0421 09:58:52.577150 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f"} Apr 21 09:58:52.577235 kubelet[2514]: E0421 09:58:52.577183 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7106ec2c-c759-48f7-8dd6-aae54768ee28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.577235 kubelet[2514]: E0421 09:58:52.577204 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7106ec2c-c759-48f7-8dd6-aae54768ee28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-598qp" podUID="7106ec2c-c759-48f7-8dd6-aae54768ee28" Apr 21 09:58:52.580005 systemd[1]: Started cri-containerd-7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75.scope - libcontainer container 7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75. 
Apr 21 09:58:52.583673 containerd[1482]: time="2026-04-21T09:58:52.583086108Z" level=error msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" failed" error="failed to destroy network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:52.584003 kubelet[2514]: E0421 09:58:52.583968 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:52.584246 kubelet[2514]: E0421 09:58:52.584216 2514 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5"} Apr 21 09:58:52.584504 kubelet[2514]: E0421 09:58:52.584377 2514 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:52.584804 kubelet[2514]: E0421 09:58:52.584405 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-98d5c77f5-775kb" podUID="3ec3bce5-d5f3-44fd-a00a-602613380bcc" Apr 21 09:58:52.613935 containerd[1482]: time="2026-04-21T09:58:52.613892819Z" level=info msg="StartContainer for \"7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75\" returns successfully" Apr 21 09:58:53.237200 systemd[1]: Created slice kubepods-besteffort-pod8e8c68fe_3eb8_447c_8335_6bfebeaa92f8.slice - libcontainer container kubepods-besteffort-pod8e8c68fe_3eb8_447c_8335_6bfebeaa92f8.slice. Apr 21 09:58:53.240188 containerd[1482]: time="2026-04-21T09:58:53.240077765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnmtt,Uid:8e8c68fe-3eb8-447c-8335-6bfebeaa92f8,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:53.422280 systemd-networkd[1366]: caliccb6418dc71: Link UP Apr 21 09:58:53.422495 systemd-networkd[1366]: caliccb6418dc71: Gained carrier Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.275 [ERROR][3753] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.306 [INFO][3753] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0 csi-node-driver- calico-system 8e8c68fe-3eb8-447c-8335-6bfebeaa92f8 731 0 2026-04-21 09:58:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 csi-node-driver-lnmtt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliccb6418dc71 [] [] }} ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.306 [INFO][3753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.354 [INFO][3762] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" HandleID="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Workload="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.367 [INFO][3762] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" HandleID="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Workload="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"csi-node-driver-lnmtt", "timestamp":"2026-04-21 09:58:53.35418412 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000332f20)} Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.367 [INFO][3762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.368 [INFO][3762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.369 [INFO][3762] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.372 [INFO][3762] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.379 [INFO][3762] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.385 [INFO][3762] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.388 [INFO][3762] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.390 [INFO][3762] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.390 [INFO][3762] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.394 [INFO][3762] ipam/ipam.go 1806: Creating new 
handle: k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.398 [INFO][3762] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.406 [INFO][3762] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.22.193/26] block=192.168.22.192/26 handle="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.406 [INFO][3762] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.193/26] handle="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.406 [INFO][3762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 09:58:53.440124 containerd[1482]: 2026-04-21 09:58:53.406 [INFO][3762] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.193/26] IPv6=[] ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" HandleID="k8s-pod-network.f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Workload="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.409 [INFO][3753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"csi-node-driver-lnmtt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.193/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccb6418dc71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.410 [INFO][3753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.193/32] ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.410 [INFO][3753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccb6418dc71 ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.423 [INFO][3753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.423 [INFO][3753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e8c68fe-3eb8-447c-8335-6bfebeaa92f8", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec", Pod:"csi-node-driver-lnmtt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccb6418dc71", MAC:"da:71:8b:0e:20:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:53.441291 containerd[1482]: 2026-04-21 09:58:53.437 [INFO][3753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec" Namespace="calico-system" Pod="csi-node-driver-lnmtt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-csi--node--driver--lnmtt-eth0" Apr 21 09:58:53.461957 containerd[1482]: time="2026-04-21T09:58:53.461620418Z" level=info msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" Apr 21 09:58:53.462564 containerd[1482]: 
time="2026-04-21T09:58:53.462217039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:53.462564 containerd[1482]: time="2026-04-21T09:58:53.462284206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:53.462564 containerd[1482]: time="2026-04-21T09:58:53.462306341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:53.462872 containerd[1482]: time="2026-04-21T09:58:53.462410775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:53.497407 kubelet[2514]: I0421 09:58:53.497159 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qgdvb" podStartSLOduration=3.6305171769999998 podStartE2EDuration="16.497028753s" podCreationTimestamp="2026-04-21 09:58:37 +0000 UTC" firstStartedPulling="2026-04-21 09:58:38.000297676 +0000 UTC m=+22.874047557" lastFinishedPulling="2026-04-21 09:58:50.866809252 +0000 UTC m=+35.740559133" observedRunningTime="2026-04-21 09:58:53.496447024 +0000 UTC m=+38.370196905" watchObservedRunningTime="2026-04-21 09:58:53.497028753 +0000 UTC m=+38.370778634" Apr 21 09:58:53.502316 systemd[1]: Started cri-containerd-f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec.scope - libcontainer container f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec. 
Apr 21 09:58:53.536001 containerd[1482]: time="2026-04-21T09:58:53.535950242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnmtt,Uid:8e8c68fe-3eb8-447c-8335-6bfebeaa92f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec\"" Apr 21 09:58:53.538320 containerd[1482]: time="2026-04-21T09:58:53.538285687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.554 [INFO][3809] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.555 [INFO][3809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" iface="eth0" netns="/var/run/netns/cni-88d4d338-eea5-3c13-5e30-0f0719dba633" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.555 [INFO][3809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" iface="eth0" netns="/var/run/netns/cni-88d4d338-eea5-3c13-5e30-0f0719dba633" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.555 [INFO][3809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" iface="eth0" netns="/var/run/netns/cni-88d4d338-eea5-3c13-5e30-0f0719dba633" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.555 [INFO][3809] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.555 [INFO][3809] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.582 [INFO][3832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.582 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.582 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.592 [WARNING][3832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.592 [INFO][3832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.594 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:58:53.598410 containerd[1482]: 2026-04-21 09:58:53.596 [INFO][3809] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:58:53.602035 containerd[1482]: time="2026-04-21T09:58:53.600906546Z" level=info msg="TearDown network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" successfully" Apr 21 09:58:53.602035 containerd[1482]: time="2026-04-21T09:58:53.600947094Z" level=info msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" returns successfully" Apr 21 09:58:53.602640 systemd[1]: run-netns-cni\x2d88d4d338\x2deea5\x2d3c13\x2d5e30\x2d0f0719dba633.mount: Deactivated successfully. 
Apr 21 09:58:53.719110 kubelet[2514]: I0421 09:58:53.718987 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-ca-bundle\") pod \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " Apr 21 09:58:53.719110 kubelet[2514]: I0421 09:58:53.719070 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpfjj\" (UniqueName: \"kubernetes.io/projected/3ec3bce5-d5f3-44fd-a00a-602613380bcc-kube-api-access-vpfjj\") pod \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " Apr 21 09:58:53.719364 kubelet[2514]: I0421 09:58:53.719125 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-backend-key-pair\") pod \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " Apr 21 09:58:53.719364 kubelet[2514]: I0421 09:58:53.719166 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-nginx-config\") pod \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\" (UID: \"3ec3bce5-d5f3-44fd-a00a-602613380bcc\") " Apr 21 09:58:53.719850 kubelet[2514]: I0421 09:58:53.719767 2514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3ec3bce5-d5f3-44fd-a00a-602613380bcc" (UID: "3ec3bce5-d5f3-44fd-a00a-602613380bcc"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:58:53.720314 kubelet[2514]: I0421 09:58:53.720247 2514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3ec3bce5-d5f3-44fd-a00a-602613380bcc" (UID: "3ec3bce5-d5f3-44fd-a00a-602613380bcc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:58:53.724433 kubelet[2514]: I0421 09:58:53.724402 2514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec3bce5-d5f3-44fd-a00a-602613380bcc-kube-api-access-vpfjj" (OuterVolumeSpecName: "kube-api-access-vpfjj") pod "3ec3bce5-d5f3-44fd-a00a-602613380bcc" (UID: "3ec3bce5-d5f3-44fd-a00a-602613380bcc"). InnerVolumeSpecName "kube-api-access-vpfjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 09:58:53.724532 kubelet[2514]: I0421 09:58:53.724478 2514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3ec3bce5-d5f3-44fd-a00a-602613380bcc" (UID: "3ec3bce5-d5f3-44fd-a00a-602613380bcc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 09:58:53.820927 kubelet[2514]: I0421 09:58:53.820074 2514 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-ca-bundle\") on node \"ci-4081-3-7-d-6a70a4c656\" DevicePath \"\"" Apr 21 09:58:53.820927 kubelet[2514]: I0421 09:58:53.820123 2514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vpfjj\" (UniqueName: \"kubernetes.io/projected/3ec3bce5-d5f3-44fd-a00a-602613380bcc-kube-api-access-vpfjj\") on node \"ci-4081-3-7-d-6a70a4c656\" DevicePath \"\"" Apr 21 09:58:53.820927 kubelet[2514]: I0421 09:58:53.820144 2514 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ec3bce5-d5f3-44fd-a00a-602613380bcc-whisker-backend-key-pair\") on node \"ci-4081-3-7-d-6a70a4c656\" DevicePath \"\"" Apr 21 09:58:53.820927 kubelet[2514]: I0421 09:58:53.820166 2514 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3ec3bce5-d5f3-44fd-a00a-602613380bcc-nginx-config\") on node \"ci-4081-3-7-d-6a70a4c656\" DevicePath \"\"" Apr 21 09:58:53.888168 systemd[1]: var-lib-kubelet-pods-3ec3bce5\x2dd5f3\x2d44fd\x2da00a\x2d602613380bcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvpfjj.mount: Deactivated successfully. Apr 21 09:58:53.888282 systemd[1]: var-lib-kubelet-pods-3ec3bce5\x2dd5f3\x2d44fd\x2da00a\x2d602613380bcc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 09:58:54.403879 kernel: calico-node[3852]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 09:58:54.473383 kubelet[2514]: I0421 09:58:54.473076 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:58:54.482004 systemd[1]: Removed slice kubepods-besteffort-pod3ec3bce5_d5f3_44fd_a00a_602613380bcc.slice - libcontainer container kubepods-besteffort-pod3ec3bce5_d5f3_44fd_a00a_602613380bcc.slice. Apr 21 09:58:54.604359 systemd[1]: Created slice kubepods-besteffort-podc8040323_2a93_42ca_bcc6_9d8f5a40af9a.slice - libcontainer container kubepods-besteffort-podc8040323_2a93_42ca_bcc6_9d8f5a40af9a.slice. Apr 21 09:58:54.613963 systemd-networkd[1366]: caliccb6418dc71: Gained IPv6LL Apr 21 09:58:54.632440 kubelet[2514]: I0421 09:58:54.632224 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8040323-2a93-42ca-bcc6-9d8f5a40af9a-whisker-backend-key-pair\") pod \"whisker-64c5644fb-vnkl9\" (UID: \"c8040323-2a93-42ca-bcc6-9d8f5a40af9a\") " pod="calico-system/whisker-64c5644fb-vnkl9" Apr 21 09:58:54.632440 kubelet[2514]: I0421 09:58:54.632292 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8brt8\" (UniqueName: \"kubernetes.io/projected/c8040323-2a93-42ca-bcc6-9d8f5a40af9a-kube-api-access-8brt8\") pod \"whisker-64c5644fb-vnkl9\" (UID: \"c8040323-2a93-42ca-bcc6-9d8f5a40af9a\") " pod="calico-system/whisker-64c5644fb-vnkl9" Apr 21 09:58:54.632440 kubelet[2514]: I0421 09:58:54.632348 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8040323-2a93-42ca-bcc6-9d8f5a40af9a-whisker-ca-bundle\") pod \"whisker-64c5644fb-vnkl9\" (UID: \"c8040323-2a93-42ca-bcc6-9d8f5a40af9a\") " pod="calico-system/whisker-64c5644fb-vnkl9" Apr 21 09:58:54.632440 
kubelet[2514]: I0421 09:58:54.632366 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c8040323-2a93-42ca-bcc6-9d8f5a40af9a-nginx-config\") pod \"whisker-64c5644fb-vnkl9\" (UID: \"c8040323-2a93-42ca-bcc6-9d8f5a40af9a\") " pod="calico-system/whisker-64c5644fb-vnkl9" Apr 21 09:58:54.855506 systemd-networkd[1366]: vxlan.calico: Link UP Apr 21 09:58:54.855514 systemd-networkd[1366]: vxlan.calico: Gained carrier Apr 21 09:58:54.913047 containerd[1482]: time="2026-04-21T09:58:54.912965271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c5644fb-vnkl9,Uid:c8040323-2a93-42ca-bcc6-9d8f5a40af9a,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:55.088992 systemd-networkd[1366]: calieb8bd3ba7c4: Link UP Apr 21 09:58:55.091697 systemd-networkd[1366]: calieb8bd3ba7c4: Gained carrier Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:54.979 [INFO][4008] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0 whisker-64c5644fb- calico-system c8040323-2a93-42ca-bcc6-9d8f5a40af9a 931 0 2026-04-21 09:58:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64c5644fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 whisker-64c5644fb-vnkl9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb8bd3ba7c4 [] [] }} ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:54.979 [INFO][4008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.012 [INFO][4019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" HandleID="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.023 [INFO][4019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" HandleID="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"whisker-64c5644fb-vnkl9", "timestamp":"2026-04-21 09:58:55.012630459 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000410f20)} Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.023 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.023 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.023 [INFO][4019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.028 [INFO][4019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.036 [INFO][4019] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.049 [INFO][4019] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.052 [INFO][4019] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.056 [INFO][4019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.056 [INFO][4019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.058 [INFO][4019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.064 [INFO][4019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.073 [INFO][4019] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.22.194/26] block=192.168.22.192/26 handle="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.073 [INFO][4019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.194/26] handle="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.073 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:58:55.112411 containerd[1482]: 2026-04-21 09:58:55.073 [INFO][4019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.194/26] IPv6=[] ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" HandleID="k8s-pod-network.3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.081 [INFO][4008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0", GenerateName:"whisker-64c5644fb-", Namespace:"calico-system", SelfLink:"", UID:"c8040323-2a93-42ca-bcc6-9d8f5a40af9a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c5644fb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"whisker-64c5644fb-vnkl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb8bd3ba7c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.081 [INFO][4008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.194/32] ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.081 [INFO][4008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb8bd3ba7c4 ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.089 [INFO][4008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.091 [INFO][4008] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0", GenerateName:"whisker-64c5644fb-", Namespace:"calico-system", SelfLink:"", UID:"c8040323-2a93-42ca-bcc6-9d8f5a40af9a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c5644fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd", Pod:"whisker-64c5644fb-vnkl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb8bd3ba7c4", MAC:"e6:e4:e7:4c:9c:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:55.113903 containerd[1482]: 2026-04-21 09:58:55.107 [INFO][4008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd" 
Namespace="calico-system" Pod="whisker-64c5644fb-vnkl9" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--64c5644fb--vnkl9-eth0" Apr 21 09:58:55.147497 containerd[1482]: time="2026-04-21T09:58:55.147187940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:55.147497 containerd[1482]: time="2026-04-21T09:58:55.147323053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:55.147497 containerd[1482]: time="2026-04-21T09:58:55.147336620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:55.149859 containerd[1482]: time="2026-04-21T09:58:55.147951632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:55.182018 systemd[1]: Started cri-containerd-3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd.scope - libcontainer container 3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd. 
Apr 21 09:58:55.245894 kubelet[2514]: I0421 09:58:55.245554 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec3bce5-d5f3-44fd-a00a-602613380bcc" path="/var/lib/kubelet/pods/3ec3bce5-d5f3-44fd-a00a-602613380bcc/volumes" Apr 21 09:58:55.246649 containerd[1482]: time="2026-04-21T09:58:55.246599948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c5644fb-vnkl9,Uid:c8040323-2a93-42ca-bcc6-9d8f5a40af9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd\"" Apr 21 09:58:55.708210 containerd[1482]: time="2026-04-21T09:58:55.707030276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:55.708210 containerd[1482]: time="2026-04-21T09:58:55.708163567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 21 09:58:55.708700 containerd[1482]: time="2026-04-21T09:58:55.708670360Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:55.711362 containerd[1482]: time="2026-04-21T09:58:55.711323991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:55.712465 containerd[1482]: time="2026-04-21T09:58:55.712429588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.174003844s" Apr 21 09:58:55.712602 containerd[1482]: 
time="2026-04-21T09:58:55.712579388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 21 09:58:55.716151 containerd[1482]: time="2026-04-21T09:58:55.716114775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 09:58:55.719435 containerd[1482]: time="2026-04-21T09:58:55.719109150Z" level=info msg="CreateContainer within sandbox \"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 09:58:55.738593 containerd[1482]: time="2026-04-21T09:58:55.738339800Z" level=info msg="CreateContainer within sandbox \"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70cabeb82490d67e84bc830de4427d167c50ec54a8cd959dfedd46453c3cb299\"" Apr 21 09:58:55.739746 containerd[1482]: time="2026-04-21T09:58:55.739702294Z" level=info msg="StartContainer for \"70cabeb82490d67e84bc830de4427d167c50ec54a8cd959dfedd46453c3cb299\"" Apr 21 09:58:55.772016 systemd[1]: Started cri-containerd-70cabeb82490d67e84bc830de4427d167c50ec54a8cd959dfedd46453c3cb299.scope - libcontainer container 70cabeb82490d67e84bc830de4427d167c50ec54a8cd959dfedd46453c3cb299. 
Apr 21 09:58:55.802520 containerd[1482]: time="2026-04-21T09:58:55.802319341Z" level=info msg="StartContainer for \"70cabeb82490d67e84bc830de4427d167c50ec54a8cd959dfedd46453c3cb299\" returns successfully" Apr 21 09:58:56.023994 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Apr 21 09:58:56.919185 systemd-networkd[1366]: calieb8bd3ba7c4: Gained IPv6LL Apr 21 09:58:57.314637 containerd[1482]: time="2026-04-21T09:58:57.314579196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:57.316916 containerd[1482]: time="2026-04-21T09:58:57.316694156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 21 09:58:57.319848 containerd[1482]: time="2026-04-21T09:58:57.318388573Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:57.320927 containerd[1482]: time="2026-04-21T09:58:57.320884454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:57.321972 containerd[1482]: time="2026-04-21T09:58:57.321863813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.605539204s" Apr 21 09:58:57.321972 containerd[1482]: time="2026-04-21T09:58:57.321903794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference 
\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 21 09:58:57.325400 containerd[1482]: time="2026-04-21T09:58:57.325214306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 09:58:57.327868 containerd[1482]: time="2026-04-21T09:58:57.327834734Z" level=info msg="CreateContainer within sandbox \"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 09:58:57.345020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723470054.mount: Deactivated successfully. Apr 21 09:58:57.348228 containerd[1482]: time="2026-04-21T09:58:57.348042672Z" level=info msg="CreateContainer within sandbox \"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cad7d1faf08123e1e3964ac290f86d387d3198f4b619202b108ea5132397aaaa\"" Apr 21 09:58:57.350513 containerd[1482]: time="2026-04-21T09:58:57.350461112Z" level=info msg="StartContainer for \"cad7d1faf08123e1e3964ac290f86d387d3198f4b619202b108ea5132397aaaa\"" Apr 21 09:58:57.390160 systemd[1]: Started cri-containerd-cad7d1faf08123e1e3964ac290f86d387d3198f4b619202b108ea5132397aaaa.scope - libcontainer container cad7d1faf08123e1e3964ac290f86d387d3198f4b619202b108ea5132397aaaa. 
Apr 21 09:58:57.427588 containerd[1482]: time="2026-04-21T09:58:57.427512184Z" level=info msg="StartContainer for \"cad7d1faf08123e1e3964ac290f86d387d3198f4b619202b108ea5132397aaaa\" returns successfully" Apr 21 09:58:59.382841 containerd[1482]: time="2026-04-21T09:58:59.381740194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 21 09:58:59.382841 containerd[1482]: time="2026-04-21T09:58:59.381797981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:59.384696 containerd[1482]: time="2026-04-21T09:58:59.383995899Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:59.384696 containerd[1482]: time="2026-04-21T09:58:59.384651169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:59.385750 containerd[1482]: time="2026-04-21T09:58:59.385460111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.060208145s" Apr 21 09:58:59.385750 containerd[1482]: time="2026-04-21T09:58:59.385495048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 21 09:58:59.386921 containerd[1482]: 
time="2026-04-21T09:58:59.386888346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 09:58:59.390967 containerd[1482]: time="2026-04-21T09:58:59.390918610Z" level=info msg="CreateContainer within sandbox \"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 09:58:59.416001 containerd[1482]: time="2026-04-21T09:58:59.415871759Z" level=info msg="CreateContainer within sandbox \"f5bbeee58cfd9779837bb660260b208ac4b6451f4bfad2d068d0b89000428bec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"82561b9e8ab6103c030ce767a9702dba3a2cd160e1cde1aba22b349265e4d5fb\"" Apr 21 09:58:59.419238 containerd[1482]: time="2026-04-21T09:58:59.417801951Z" level=info msg="StartContainer for \"82561b9e8ab6103c030ce767a9702dba3a2cd160e1cde1aba22b349265e4d5fb\"" Apr 21 09:58:59.452137 systemd[1]: Started cri-containerd-82561b9e8ab6103c030ce767a9702dba3a2cd160e1cde1aba22b349265e4d5fb.scope - libcontainer container 82561b9e8ab6103c030ce767a9702dba3a2cd160e1cde1aba22b349265e4d5fb. 
Apr 21 09:58:59.486866 containerd[1482]: time="2026-04-21T09:58:59.486782260Z" level=info msg="StartContainer for \"82561b9e8ab6103c030ce767a9702dba3a2cd160e1cde1aba22b349265e4d5fb\" returns successfully" Apr 21 09:58:59.510085 kubelet[2514]: I0421 09:58:59.510021 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lnmtt" podStartSLOduration=16.661483188 podStartE2EDuration="22.510005152s" podCreationTimestamp="2026-04-21 09:58:37 +0000 UTC" firstStartedPulling="2026-04-21 09:58:53.538010894 +0000 UTC m=+38.411760775" lastFinishedPulling="2026-04-21 09:58:59.386532858 +0000 UTC m=+44.260282739" observedRunningTime="2026-04-21 09:58:59.508961139 +0000 UTC m=+44.382711020" watchObservedRunningTime="2026-04-21 09:58:59.510005152 +0000 UTC m=+44.383755033" Apr 21 09:59:00.350652 kubelet[2514]: I0421 09:59:00.350370 2514 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 09:59:00.350652 kubelet[2514]: I0421 09:59:00.350423 2514 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 09:59:01.194861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538058010.mount: Deactivated successfully. 
Apr 21 09:59:01.210798 containerd[1482]: time="2026-04-21T09:59:01.210745353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:01.212263 containerd[1482]: time="2026-04-21T09:59:01.212215734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 21 09:59:01.212885 containerd[1482]: time="2026-04-21T09:59:01.212809265Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:01.216595 containerd[1482]: time="2026-04-21T09:59:01.216265405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:01.217547 containerd[1482]: time="2026-04-21T09:59:01.217456828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.830533547s" Apr 21 09:59:01.217547 containerd[1482]: time="2026-04-21T09:59:01.217494124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 21 09:59:01.224001 containerd[1482]: time="2026-04-21T09:59:01.223959294Z" level=info msg="CreateContainer within sandbox \"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 09:59:01.241418 
containerd[1482]: time="2026-04-21T09:59:01.241282652Z" level=info msg="CreateContainer within sandbox \"3de2c350a1e9f18b6d16661c340b4f9fc3ac38770c5eee994b0eee1ad1a7dcbd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f0ff90f53c034b634f62510d0358d8690628852e8a210b2695fa0912a8eb26d3\"" Apr 21 09:59:01.242358 containerd[1482]: time="2026-04-21T09:59:01.242136092Z" level=info msg="StartContainer for \"f0ff90f53c034b634f62510d0358d8690628852e8a210b2695fa0912a8eb26d3\"" Apr 21 09:59:01.281203 systemd[1]: Started cri-containerd-f0ff90f53c034b634f62510d0358d8690628852e8a210b2695fa0912a8eb26d3.scope - libcontainer container f0ff90f53c034b634f62510d0358d8690628852e8a210b2695fa0912a8eb26d3. Apr 21 09:59:01.327885 containerd[1482]: time="2026-04-21T09:59:01.326547106Z" level=info msg="StartContainer for \"f0ff90f53c034b634f62510d0358d8690628852e8a210b2695fa0912a8eb26d3\" returns successfully" Apr 21 09:59:01.519083 kubelet[2514]: I0421 09:59:01.518867 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64c5644fb-vnkl9" podStartSLOduration=1.551105817 podStartE2EDuration="7.517600723s" podCreationTimestamp="2026-04-21 09:58:54 +0000 UTC" firstStartedPulling="2026-04-21 09:58:55.252858923 +0000 UTC m=+40.126608764" lastFinishedPulling="2026-04-21 09:59:01.219353789 +0000 UTC m=+46.093103670" observedRunningTime="2026-04-21 09:59:01.516994667 +0000 UTC m=+46.390744548" watchObservedRunningTime="2026-04-21 09:59:01.517600723 +0000 UTC m=+46.391350564" Apr 21 09:59:03.234169 containerd[1482]: time="2026-04-21T09:59:03.233562190Z" level=info msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" Apr 21 09:59:03.235667 containerd[1482]: time="2026-04-21T09:59:03.235162595Z" level=info msg="StopPodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" Apr 21 09:59:03.239597 containerd[1482]: time="2026-04-21T09:59:03.238264409Z" level=info 
msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.338 [INFO][4354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.338 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" iface="eth0" netns="/var/run/netns/cni-d571d095-c88c-194a-589c-723d61dae722" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.341 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" iface="eth0" netns="/var/run/netns/cni-d571d095-c88c-194a-589c-723d61dae722" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.342 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" iface="eth0" netns="/var/run/netns/cni-d571d095-c88c-194a-589c-723d61dae722" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.342 [INFO][4354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.342 [INFO][4354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.378 [INFO][4371] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.378 [INFO][4371] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.378 [INFO][4371] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.393 [WARNING][4371] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.393 [INFO][4371] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.395 [INFO][4371] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.403127 containerd[1482]: 2026-04-21 09:59:03.399 [INFO][4354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:03.407283 containerd[1482]: time="2026-04-21T09:59:03.404508952Z" level=info msg="TearDown network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" successfully" Apr 21 09:59:03.407283 containerd[1482]: time="2026-04-21T09:59:03.404561292Z" level=info msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" returns successfully" Apr 21 09:59:03.407879 containerd[1482]: time="2026-04-21T09:59:03.407614407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-598qp,Uid:7106ec2c-c759-48f7-8dd6-aae54768ee28,Namespace:kube-system,Attempt:1,}" Apr 21 09:59:03.413180 systemd[1]: run-netns-cni\x2dd571d095\x2dc88c\x2d194a\x2d589c\x2d723d61dae722.mount: Deactivated successfully. 
Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.332 [INFO][4341] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.332 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" iface="eth0" netns="/var/run/netns/cni-d0208fb0-f3a5-e649-1daa-1ddd2cae3e5f" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.332 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" iface="eth0" netns="/var/run/netns/cni-d0208fb0-f3a5-e649-1daa-1ddd2cae3e5f" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.335 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" iface="eth0" netns="/var/run/netns/cni-d0208fb0-f3a5-e649-1daa-1ddd2cae3e5f" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.336 [INFO][4341] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.336 [INFO][4341] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.383 [INFO][4366] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.383 
[INFO][4366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.395 [INFO][4366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.417 [WARNING][4366] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.417 [INFO][4366] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.420 [INFO][4366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.437667 containerd[1482]: 2026-04-21 09:59:03.428 [INFO][4341] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:03.440760 containerd[1482]: time="2026-04-21T09:59:03.439003243Z" level=info msg="TearDown network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" successfully" Apr 21 09:59:03.440760 containerd[1482]: time="2026-04-21T09:59:03.439784499Z" level=info msg="StopPodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" returns successfully" Apr 21 09:59:03.442467 systemd[1]: run-netns-cni\x2dd0208fb0\x2df3a5\x2de649\x2d1daa\x2d1ddd2cae3e5f.mount: Deactivated successfully. 
Apr 21 09:59:03.443882 containerd[1482]: time="2026-04-21T09:59:03.443735714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-rwr6l,Uid:43fc62f9-80c2-420c-b7a3-69da00af4f8f,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.345 [INFO][4344] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.345 [INFO][4344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" iface="eth0" netns="/var/run/netns/cni-5f01019d-a092-02f7-7227-aabab63ee7a3" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.346 [INFO][4344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" iface="eth0" netns="/var/run/netns/cni-5f01019d-a092-02f7-7227-aabab63ee7a3" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.346 [INFO][4344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" iface="eth0" netns="/var/run/netns/cni-5f01019d-a092-02f7-7227-aabab63ee7a3" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.346 [INFO][4344] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.346 [INFO][4344] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.396 [INFO][4373] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.397 [INFO][4373] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.420 [INFO][4373] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.446 [WARNING][4373] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.446 [INFO][4373] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.449 [INFO][4373] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.455597 containerd[1482]: 2026-04-21 09:59:03.452 [INFO][4344] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:03.457209 containerd[1482]: time="2026-04-21T09:59:03.456966000Z" level=info msg="TearDown network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" successfully" Apr 21 09:59:03.457209 containerd[1482]: time="2026-04-21T09:59:03.457036427Z" level=info msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" returns successfully" Apr 21 09:59:03.459560 containerd[1482]: time="2026-04-21T09:59:03.457964818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm2rg,Uid:b02e60bd-7bf5-4d67-90f0-2644154eb8f8,Namespace:kube-system,Attempt:1,}" Apr 21 09:59:03.459617 systemd[1]: run-netns-cni\x2d5f01019d\x2da092\x2d02f7\x2d7227\x2daabab63ee7a3.mount: Deactivated successfully. 
Apr 21 09:59:03.648662 kubelet[2514]: I0421 09:59:03.647865 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:03.669910 systemd-networkd[1366]: cali57d6176e888: Link UP Apr 21 09:59:03.670246 systemd-networkd[1366]: cali57d6176e888: Gained carrier Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.496 [INFO][4387] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0 coredns-674b8bbfcf- kube-system 7106ec2c-c759-48f7-8dd6-aae54768ee28 979 0 2026-04-21 09:58:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 coredns-674b8bbfcf-598qp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57d6176e888 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.496 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.571 [INFO][4419] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" HandleID="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 
09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.590 [INFO][4419] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" HandleID="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbc90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"coredns-674b8bbfcf-598qp", "timestamp":"2026-04-21 09:59:03.571686487 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000431600)} Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.590 [INFO][4419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.590 [INFO][4419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.592 [INFO][4419] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.601 [INFO][4419] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.620 [INFO][4419] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.627 [INFO][4419] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.630 [INFO][4419] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.634 [INFO][4419] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.634 [INFO][4419] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.636 [INFO][4419] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437 Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.643 [INFO][4419] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.656 [INFO][4419] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.22.195/26] block=192.168.22.192/26 handle="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.656 [INFO][4419] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.195/26] handle="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.656 [INFO][4419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.693839 containerd[1482]: 2026-04-21 09:59:03.656 [INFO][4419] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.195/26] IPv6=[] ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" HandleID="k8s-pod-network.510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.663 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7106ec2c-c759-48f7-8dd6-aae54768ee28", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"coredns-674b8bbfcf-598qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57d6176e888", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.663 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.195/32] ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.663 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57d6176e888 ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.669 [INFO][4387] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.671 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7106ec2c-c759-48f7-8dd6-aae54768ee28", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437", Pod:"coredns-674b8bbfcf-598qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57d6176e888", 
MAC:"52:c3:8e:06:ba:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.695352 containerd[1482]: 2026-04-21 09:59:03.688 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437" Namespace="kube-system" Pod="coredns-674b8bbfcf-598qp" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:03.733218 containerd[1482]: time="2026-04-21T09:59:03.732942543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:03.733218 containerd[1482]: time="2026-04-21T09:59:03.733064749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:03.733218 containerd[1482]: time="2026-04-21T09:59:03.733108325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.733555 containerd[1482]: time="2026-04-21T09:59:03.733447494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.767331 systemd[1]: Started cri-containerd-510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437.scope - libcontainer container 510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437. 
Apr 21 09:59:03.782899 systemd-networkd[1366]: cali860362aca7a: Link UP Apr 21 09:59:03.783597 systemd-networkd[1366]: cali860362aca7a: Gained carrier Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.531 [INFO][4396] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0 calico-apiserver-b79cb9f78- calico-system 43fc62f9-80c2-420c-b7a3-69da00af4f8f 978 0 2026-04-21 09:58:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b79cb9f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 calico-apiserver-b79cb9f78-rwr6l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali860362aca7a [] [] }} ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.532 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.586 [INFO][4428] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" HandleID="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.812737 
containerd[1482]: 2026-04-21 09:59:03.603 [INFO][4428] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" HandleID="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"calico-apiserver-b79cb9f78-rwr6l", "timestamp":"2026-04-21 09:59:03.586252559 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018c2c0)} Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.603 [INFO][4428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.658 [INFO][4428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.658 [INFO][4428] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.703 [INFO][4428] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.721 [INFO][4428] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.731 [INFO][4428] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.735 [INFO][4428] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.741 [INFO][4428] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.741 [INFO][4428] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.744 [INFO][4428] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959 Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.752 [INFO][4428] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.766 [INFO][4428] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.22.196/26] block=192.168.22.192/26 handle="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.766 [INFO][4428] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.196/26] handle="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.768 [INFO][4428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.812737 containerd[1482]: 2026-04-21 09:59:03.769 [INFO][4428] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.196/26] IPv6=[] ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" HandleID="k8s-pod-network.4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.776 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"43fc62f9-80c2-420c-b7a3-69da00af4f8f", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"calico-apiserver-b79cb9f78-rwr6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali860362aca7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.778 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.196/32] ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.779 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali860362aca7a ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.782 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" 
WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.783 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"43fc62f9-80c2-420c-b7a3-69da00af4f8f", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959", Pod:"calico-apiserver-b79cb9f78-rwr6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali860362aca7a", MAC:"3e:b2:4b:db:e1:be", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.813543 containerd[1482]: 2026-04-21 09:59:03.807 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-rwr6l" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:03.852634 containerd[1482]: time="2026-04-21T09:59:03.851175359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-598qp,Uid:7106ec2c-c759-48f7-8dd6-aae54768ee28,Namespace:kube-system,Attempt:1,} returns sandbox id \"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437\"" Apr 21 09:59:03.861592 containerd[1482]: time="2026-04-21T09:59:03.861466613Z" level=info msg="CreateContainer within sandbox \"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:59:03.863424 containerd[1482]: time="2026-04-21T09:59:03.862794875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:03.863424 containerd[1482]: time="2026-04-21T09:59:03.863065418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:03.863424 containerd[1482]: time="2026-04-21T09:59:03.863078223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.863424 containerd[1482]: time="2026-04-21T09:59:03.863178301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.901308 containerd[1482]: time="2026-04-21T09:59:03.901167355Z" level=info msg="CreateContainer within sandbox \"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"971343ef5d394dd99d0af7fd65488b27fe76e68512e6c5075744ec0f274ec141\"" Apr 21 09:59:03.906842 containerd[1482]: time="2026-04-21T09:59:03.906010107Z" level=info msg="StartContainer for \"971343ef5d394dd99d0af7fd65488b27fe76e68512e6c5075744ec0f274ec141\"" Apr 21 09:59:03.910024 systemd[1]: Started cri-containerd-4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959.scope - libcontainer container 4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959. Apr 21 09:59:03.914233 systemd-networkd[1366]: cali27bd019bd09: Link UP Apr 21 09:59:03.914578 systemd-networkd[1366]: cali27bd019bd09: Gained carrier Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.554 [INFO][4408] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0 coredns-674b8bbfcf- kube-system b02e60bd-7bf5-4d67-90f0-2644154eb8f8 980 0 2026-04-21 09:58:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 coredns-674b8bbfcf-nm2rg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27bd019bd09 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.554 [INFO][4408] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.603 [INFO][4435] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" HandleID="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.619 [INFO][4435] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" HandleID="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"coredns-674b8bbfcf-nm2rg", "timestamp":"2026-04-21 09:59:03.603964701 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400055d080)} Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.619 [INFO][4435] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.774 [INFO][4435] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.774 [INFO][4435] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.803 [INFO][4435] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.820 [INFO][4435] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.837 [INFO][4435] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.843 [INFO][4435] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.851 [INFO][4435] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.851 [INFO][4435] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.855 [INFO][4435] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136 Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.872 [INFO][4435] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.888 [INFO][4435] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.22.197/26] block=192.168.22.192/26 handle="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.889 [INFO][4435] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.197/26] handle="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.890 [INFO][4435] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.957492 containerd[1482]: 2026-04-21 09:59:03.890 [INFO][4435] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.197/26] IPv6=[] ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" HandleID="k8s-pod-network.5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.898 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b02e60bd-7bf5-4d67-90f0-2644154eb8f8", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"coredns-674b8bbfcf-nm2rg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27bd019bd09", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.899 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.197/32] ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.899 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27bd019bd09 ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.916 [INFO][4408] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.927 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b02e60bd-7bf5-4d67-90f0-2644154eb8f8", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136", Pod:"coredns-674b8bbfcf-nm2rg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27bd019bd09", 
MAC:"be:60:bf:28:47:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.958218 containerd[1482]: 2026-04-21 09:59:03.940 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm2rg" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:03.999660 systemd[1]: Started cri-containerd-971343ef5d394dd99d0af7fd65488b27fe76e68512e6c5075744ec0f274ec141.scope - libcontainer container 971343ef5d394dd99d0af7fd65488b27fe76e68512e6c5075744ec0f274ec141. Apr 21 09:59:04.038611 containerd[1482]: time="2026-04-21T09:59:04.038427420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-rwr6l,Uid:43fc62f9-80c2-420c-b7a3-69da00af4f8f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959\"" Apr 21 09:59:04.043500 containerd[1482]: time="2026-04-21T09:59:04.043453901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 09:59:04.054571 containerd[1482]: time="2026-04-21T09:59:04.050915976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:04.054571 containerd[1482]: time="2026-04-21T09:59:04.051000046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:04.054571 containerd[1482]: time="2026-04-21T09:59:04.051012290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:04.054571 containerd[1482]: time="2026-04-21T09:59:04.051106044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:04.088399 systemd[1]: Started cri-containerd-5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136.scope - libcontainer container 5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136. Apr 21 09:59:04.099169 containerd[1482]: time="2026-04-21T09:59:04.098834110Z" level=info msg="StartContainer for \"971343ef5d394dd99d0af7fd65488b27fe76e68512e6c5075744ec0f274ec141\" returns successfully" Apr 21 09:59:04.170327 containerd[1482]: time="2026-04-21T09:59:04.170119900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm2rg,Uid:b02e60bd-7bf5-4d67-90f0-2644154eb8f8,Namespace:kube-system,Attempt:1,} returns sandbox id \"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136\"" Apr 21 09:59:04.178987 containerd[1482]: time="2026-04-21T09:59:04.178849748Z" level=info msg="CreateContainer within sandbox \"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:59:04.194367 containerd[1482]: time="2026-04-21T09:59:04.193598274Z" level=info msg="CreateContainer within sandbox \"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04530a8c5003679864b35a16ef3cdd7567d7512d12598a7b33bc4aedaeaf7693\"" Apr 21 09:59:04.195510 containerd[1482]: time="2026-04-21T09:59:04.195269914Z" level=info msg="StartContainer for \"04530a8c5003679864b35a16ef3cdd7567d7512d12598a7b33bc4aedaeaf7693\"" Apr 
21 09:59:04.235088 systemd[1]: Started cri-containerd-04530a8c5003679864b35a16ef3cdd7567d7512d12598a7b33bc4aedaeaf7693.scope - libcontainer container 04530a8c5003679864b35a16ef3cdd7567d7512d12598a7b33bc4aedaeaf7693. Apr 21 09:59:04.270027 containerd[1482]: time="2026-04-21T09:59:04.269976689Z" level=info msg="StartContainer for \"04530a8c5003679864b35a16ef3cdd7567d7512d12598a7b33bc4aedaeaf7693\" returns successfully" Apr 21 09:59:04.568282 kubelet[2514]: I0421 09:59:04.567278 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nm2rg" podStartSLOduration=42.567241112 podStartE2EDuration="42.567241112s" podCreationTimestamp="2026-04-21 09:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:04.543741489 +0000 UTC m=+49.417491370" watchObservedRunningTime="2026-04-21 09:59:04.567241112 +0000 UTC m=+49.440990993" Apr 21 09:59:04.568282 kubelet[2514]: I0421 09:59:04.567398 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-598qp" podStartSLOduration=42.567392886 podStartE2EDuration="42.567392886s" podCreationTimestamp="2026-04-21 09:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:04.565354036 +0000 UTC m=+49.439103957" watchObservedRunningTime="2026-04-21 09:59:04.567392886 +0000 UTC m=+49.441142767" Apr 21 09:59:04.982197 systemd-networkd[1366]: cali57d6176e888: Gained IPv6LL Apr 21 09:59:05.234377 containerd[1482]: time="2026-04-21T09:59:05.233577777Z" level=info msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.293 [INFO][4759] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.295 [INFO][4759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" iface="eth0" netns="/var/run/netns/cni-4f23bae1-9e91-ea7d-fea2-719efe390e03" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.296 [INFO][4759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" iface="eth0" netns="/var/run/netns/cni-4f23bae1-9e91-ea7d-fea2-719efe390e03" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.296 [INFO][4759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" iface="eth0" netns="/var/run/netns/cni-4f23bae1-9e91-ea7d-fea2-719efe390e03" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.297 [INFO][4759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.297 [INFO][4759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.319 [INFO][4767] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.319 [INFO][4767] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.319 [INFO][4767] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.336 [WARNING][4767] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.336 [INFO][4767] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.338 [INFO][4767] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:05.344246 containerd[1482]: 2026-04-21 09:59:05.342 [INFO][4759] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:05.345529 containerd[1482]: time="2026-04-21T09:59:05.344490694Z" level=info msg="TearDown network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" successfully" Apr 21 09:59:05.345529 containerd[1482]: time="2026-04-21T09:59:05.344530547Z" level=info msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" returns successfully" Apr 21 09:59:05.345529 containerd[1482]: time="2026-04-21T09:59:05.345256394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hfxp8,Uid:cb94aaf5-d4ea-43a3-a742-62679734ea77,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:05.350488 systemd[1]: run-netns-cni\x2d4f23bae1\x2d9e91\x2dea7d\x2dfea2\x2d719efe390e03.mount: Deactivated successfully. Apr 21 09:59:05.430960 systemd-networkd[1366]: cali860362aca7a: Gained IPv6LL Apr 21 09:59:05.556905 systemd-networkd[1366]: cali29d65add248: Link UP Apr 21 09:59:05.559017 systemd-networkd[1366]: cali29d65add248: Gained carrier Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.411 [INFO][4774] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0 goldmane-5b85766d88- calico-system cb94aaf5-d4ea-43a3-a742-62679734ea77 1018 0 2026-04-21 09:58:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 goldmane-5b85766d88-hfxp8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali29d65add248 [] [] }} ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" 
WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.411 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.474 [INFO][4786] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" HandleID="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.490 [INFO][4786] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" HandleID="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273e50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"goldmane-5b85766d88-hfxp8", "timestamp":"2026-04-21 09:59:05.474725494 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003ab1e0)} Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.490 [INFO][4786] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.490 [INFO][4786] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.490 [INFO][4786] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.494 [INFO][4786] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.502 [INFO][4786] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.508 [INFO][4786] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.510 [INFO][4786] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.513 [INFO][4786] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.513 [INFO][4786] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.518 [INFO][4786] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.525 [INFO][4786] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" 
host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.538 [INFO][4786] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.22.198/26] block=192.168.22.192/26 handle="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.538 [INFO][4786] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.198/26] handle="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.538 [INFO][4786] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:05.593981 containerd[1482]: 2026-04-21 09:59:05.538 [INFO][4786] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.198/26] IPv6=[] ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" HandleID="k8s-pod-network.b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.544 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb94aaf5-d4ea-43a3-a742-62679734ea77", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"goldmane-5b85766d88-hfxp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29d65add248", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.544 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.198/32] ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.544 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29d65add248 ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.559 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" 
WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.559 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb94aaf5-d4ea-43a3-a742-62679734ea77", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d", Pod:"goldmane-5b85766d88-hfxp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29d65add248", MAC:"3a:02:d9:d2:61:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 
09:59:05.594800 containerd[1482]: 2026-04-21 09:59:05.581 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d" Namespace="calico-system" Pod="goldmane-5b85766d88-hfxp8" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:05.678684 containerd[1482]: time="2026-04-21T09:59:05.664220104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:05.678684 containerd[1482]: time="2026-04-21T09:59:05.664330181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:05.678684 containerd[1482]: time="2026-04-21T09:59:05.664374276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.678684 containerd[1482]: time="2026-04-21T09:59:05.664503960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.708072 systemd[1]: Started cri-containerd-b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d.scope - libcontainer container b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d. 
Apr 21 09:59:05.762310 containerd[1482]: time="2026-04-21T09:59:05.762259567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hfxp8,Uid:cb94aaf5-d4ea-43a3-a742-62679734ea77,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d\"" Apr 21 09:59:05.878263 systemd-networkd[1366]: cali27bd019bd09: Gained IPv6LL Apr 21 09:59:06.710866 systemd-networkd[1366]: cali29d65add248: Gained IPv6LL Apr 21 09:59:07.233175 containerd[1482]: time="2026-04-21T09:59:07.233124347Z" level=info msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.311 [INFO][4885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.312 [INFO][4885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" iface="eth0" netns="/var/run/netns/cni-7d039b3c-821d-4697-698a-9d9fcbaf7e8a" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.312 [INFO][4885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" iface="eth0" netns="/var/run/netns/cni-7d039b3c-821d-4697-698a-9d9fcbaf7e8a" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.315 [INFO][4885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" iface="eth0" netns="/var/run/netns/cni-7d039b3c-821d-4697-698a-9d9fcbaf7e8a" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.315 [INFO][4885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.315 [INFO][4885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.346 [INFO][4892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.347 [INFO][4892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.347 [INFO][4892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.362 [WARNING][4892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.362 [INFO][4892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.365 [INFO][4892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:07.374410 containerd[1482]: 2026-04-21 09:59:07.368 [INFO][4885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:07.374410 containerd[1482]: time="2026-04-21T09:59:07.374449829Z" level=info msg="TearDown network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" successfully" Apr 21 09:59:07.374410 containerd[1482]: time="2026-04-21T09:59:07.374479638Z" level=info msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" returns successfully" Apr 21 09:59:07.379341 containerd[1482]: time="2026-04-21T09:59:07.378791996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfd49cd7d-twdnt,Uid:076dd66b-5522-4ecd-a955-84e896c699d3,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:07.379601 systemd[1]: run-netns-cni\x2d7d039b3c\x2d821d\x2d4697\x2d698a\x2d9d9fcbaf7e8a.mount: Deactivated successfully. 
Apr 21 09:59:07.580072 systemd-networkd[1366]: cali49d3852e823: Link UP Apr 21 09:59:07.580538 systemd-networkd[1366]: cali49d3852e823: Gained carrier Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.450 [INFO][4900] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0 calico-kube-controllers-cfd49cd7d- calico-system 076dd66b-5522-4ecd-a955-84e896c699d3 1034 0 2026-04-21 09:58:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cfd49cd7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 calico-kube-controllers-cfd49cd7d-twdnt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali49d3852e823 [] [] }} ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.450 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.504 [INFO][4911] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" HandleID="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" 
Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.521 [INFO][4911] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" HandleID="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-d-6a70a4c656", "pod":"calico-kube-controllers-cfd49cd7d-twdnt", "timestamp":"2026-04-21 09:59:07.504246867 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.522 [INFO][4911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.522 [INFO][4911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.522 [INFO][4911] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.526 [INFO][4911] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.533 [INFO][4911] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.539 [INFO][4911] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.543 [INFO][4911] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.548 [INFO][4911] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.548 [INFO][4911] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 handle="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.553 [INFO][4911] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358 Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.562 [INFO][4911] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.570 [INFO][4911] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.22.199/26] block=192.168.22.192/26 handle="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.570 [INFO][4911] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.199/26] handle="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.570 [INFO][4911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:07.607090 containerd[1482]: 2026-04-21 09:59:07.570 [INFO][4911] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.199/26] IPv6=[] ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" HandleID="k8s-pod-network.19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.574 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0", GenerateName:"calico-kube-controllers-cfd49cd7d-", Namespace:"calico-system", SelfLink:"", UID:"076dd66b-5522-4ecd-a955-84e896c699d3", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfd49cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"calico-kube-controllers-cfd49cd7d-twdnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49d3852e823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.574 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.199/32] ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.574 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49d3852e823 ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.583 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.584 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0", GenerateName:"calico-kube-controllers-cfd49cd7d-", Namespace:"calico-system", SelfLink:"", UID:"076dd66b-5522-4ecd-a955-84e896c699d3", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfd49cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358", Pod:"calico-kube-controllers-cfd49cd7d-twdnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49d3852e823", MAC:"3e:7f:f1:75:45:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:07.609274 containerd[1482]: 2026-04-21 09:59:07.601 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358" Namespace="calico-system" Pod="calico-kube-controllers-cfd49cd7d-twdnt" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:07.652584 containerd[1482]: time="2026-04-21T09:59:07.652224983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:07.652584 containerd[1482]: time="2026-04-21T09:59:07.652294084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:07.652584 containerd[1482]: time="2026-04-21T09:59:07.652317331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:07.653548 containerd[1482]: time="2026-04-21T09:59:07.652432006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:07.698024 systemd[1]: Started cri-containerd-19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358.scope - libcontainer container 19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358. 
Apr 21 09:59:07.787024 containerd[1482]: time="2026-04-21T09:59:07.786888429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfd49cd7d-twdnt,Uid:076dd66b-5522-4ecd-a955-84e896c699d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358\"" Apr 21 09:59:07.821867 containerd[1482]: time="2026-04-21T09:59:07.821739562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:07.824440 containerd[1482]: time="2026-04-21T09:59:07.823905304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 21 09:59:07.828159 containerd[1482]: time="2026-04-21T09:59:07.828116712Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:07.833653 containerd[1482]: time="2026-04-21T09:59:07.833497717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:07.838126 containerd[1482]: time="2026-04-21T09:59:07.834295120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.790798644s" Apr 21 09:59:07.838126 containerd[1482]: time="2026-04-21T09:59:07.834336973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 09:59:07.838126 containerd[1482]: time="2026-04-21T09:59:07.837831722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 09:59:07.843297 containerd[1482]: time="2026-04-21T09:59:07.843252739Z" level=info msg="CreateContainer within sandbox \"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 09:59:07.873585 containerd[1482]: time="2026-04-21T09:59:07.873532915Z" level=info msg="CreateContainer within sandbox \"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fa58b61f65c15f320334fbee80047e67629bc4fac066629bf1605abb60b4753c\"" Apr 21 09:59:07.874512 containerd[1482]: time="2026-04-21T09:59:07.874480525Z" level=info msg="StartContainer for \"fa58b61f65c15f320334fbee80047e67629bc4fac066629bf1605abb60b4753c\"" Apr 21 09:59:07.902015 systemd[1]: Started cri-containerd-fa58b61f65c15f320334fbee80047e67629bc4fac066629bf1605abb60b4753c.scope - libcontainer container fa58b61f65c15f320334fbee80047e67629bc4fac066629bf1605abb60b4753c. Apr 21 09:59:07.955746 containerd[1482]: time="2026-04-21T09:59:07.955699193Z" level=info msg="StartContainer for \"fa58b61f65c15f320334fbee80047e67629bc4fac066629bf1605abb60b4753c\" returns successfully" Apr 21 09:59:08.231724 containerd[1482]: time="2026-04-21T09:59:08.231389593Z" level=info msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" Apr 21 09:59:08.380626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495035209.mount: Deactivated successfully. 
Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" iface="eth0" netns="/var/run/netns/cni-98e2063b-bd0d-3011-d98a-1dc803ca998c" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" iface="eth0" netns="/var/run/netns/cni-98e2063b-bd0d-3011-d98a-1dc803ca998c" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" iface="eth0" netns="/var/run/netns/cni-98e2063b-bd0d-3011-d98a-1dc803ca998c" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.335 [INFO][5036] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.372 [INFO][5044] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.372 
[INFO][5044] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.372 [INFO][5044] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.392 [WARNING][5044] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.392 [INFO][5044] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.394 [INFO][5044] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:08.400916 containerd[1482]: 2026-04-21 09:59:08.398 [INFO][5036] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:08.403589 containerd[1482]: time="2026-04-21T09:59:08.401781773Z" level=info msg="TearDown network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" successfully" Apr 21 09:59:08.403589 containerd[1482]: time="2026-04-21T09:59:08.401811061Z" level=info msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" returns successfully" Apr 21 09:59:08.403589 containerd[1482]: time="2026-04-21T09:59:08.403443375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-7tssf,Uid:6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:08.411434 systemd[1]: run-netns-cni\x2d98e2063b\x2dbd0d\x2d3011\x2dd98a\x2d1dc803ca998c.mount: Deactivated successfully. Apr 21 09:59:08.614059 systemd-networkd[1366]: califa3ebdf6e83: Link UP Apr 21 09:59:08.614920 systemd-networkd[1366]: califa3ebdf6e83: Gained carrier Apr 21 09:59:08.628676 kubelet[2514]: I0421 09:59:08.628603 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-b79cb9f78-rwr6l" podStartSLOduration=28.835838814 podStartE2EDuration="32.628587207s" podCreationTimestamp="2026-04-21 09:58:36 +0000 UTC" firstStartedPulling="2026-04-21 09:59:04.042874013 +0000 UTC m=+48.916623854" lastFinishedPulling="2026-04-21 09:59:07.835622366 +0000 UTC m=+52.709372247" observedRunningTime="2026-04-21 09:59:08.582243875 +0000 UTC m=+53.455993756" watchObservedRunningTime="2026-04-21 09:59:08.628587207 +0000 UTC m=+53.502337048" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.481 [INFO][5051] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0 calico-apiserver-b79cb9f78- calico-system 6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9 1046 0 
2026-04-21 09:58:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b79cb9f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-d-6a70a4c656 calico-apiserver-b79cb9f78-7tssf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califa3ebdf6e83 [] [] }} ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.481 [INFO][5051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.526 [INFO][5063] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" HandleID="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.541 [INFO][5063] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" HandleID="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe60), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-7-d-6a70a4c656", "pod":"calico-apiserver-b79cb9f78-7tssf", "timestamp":"2026-04-21 09:59:08.526990877 +0000 UTC"}, Hostname:"ci-4081-3-7-d-6a70a4c656", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e6dc0)} Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.541 [INFO][5063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.542 [INFO][5063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.542 [INFO][5063] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-d-6a70a4c656' Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.546 [INFO][5063] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.557 [INFO][5063] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.565 [INFO][5063] ipam/ipam.go 526: Trying affinity for 192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.569 [INFO][5063] ipam/ipam.go 160: Attempting to load block cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.573 [INFO][5063] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.22.192/26 host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.573 [INFO][5063] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.22.192/26 
handle="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.576 [INFO][5063] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747 Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.586 [INFO][5063] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.22.192/26 handle="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.594 [INFO][5063] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.22.200/26] block=192.168.22.192/26 handle="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.595 [INFO][5063] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.22.200/26] handle="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" host="ci-4081-3-7-d-6a70a4c656" Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.595 [INFO][5063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 09:59:08.636388 containerd[1482]: 2026-04-21 09:59:08.595 [INFO][5063] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.22.200/26] IPv6=[] ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" HandleID="k8s-pod-network.2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.599 [INFO][5051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"", Pod:"calico-apiserver-b79cb9f78-7tssf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.22.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califa3ebdf6e83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.599 [INFO][5051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.200/32] ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.599 [INFO][5051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa3ebdf6e83 ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.613 [INFO][5051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.614 [INFO][5051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747", Pod:"calico-apiserver-b79cb9f78-7tssf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califa3ebdf6e83", MAC:"02:39:aa:19:ef:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:08.637151 containerd[1482]: 2026-04-21 09:59:08.630 [INFO][5051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747" Namespace="calico-system" Pod="calico-apiserver-b79cb9f78-7tssf" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:08.663948 containerd[1482]: time="2026-04-21T09:59:08.663056893Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:08.663948 containerd[1482]: time="2026-04-21T09:59:08.663716965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:08.663948 containerd[1482]: time="2026-04-21T09:59:08.663790506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:08.665590 containerd[1482]: time="2026-04-21T09:59:08.665487599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:08.702160 systemd[1]: Started cri-containerd-2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747.scope - libcontainer container 2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747. Apr 21 09:59:08.763874 containerd[1482]: time="2026-04-21T09:59:08.763702227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79cb9f78-7tssf,Uid:6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747\"" Apr 21 09:59:08.770449 containerd[1482]: time="2026-04-21T09:59:08.770400291Z" level=info msg="CreateContainer within sandbox \"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 09:59:08.790336 containerd[1482]: time="2026-04-21T09:59:08.790296387Z" level=info msg="CreateContainer within sandbox \"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eaaf552bbe62b0e40f29544665c53100453de6cf151cf481d83c13b7be860def\"" Apr 21 09:59:08.793413 containerd[1482]: time="2026-04-21T09:59:08.791283873Z" level=info 
msg="StartContainer for \"eaaf552bbe62b0e40f29544665c53100453de6cf151cf481d83c13b7be860def\"" Apr 21 09:59:08.831028 systemd[1]: Started cri-containerd-eaaf552bbe62b0e40f29544665c53100453de6cf151cf481d83c13b7be860def.scope - libcontainer container eaaf552bbe62b0e40f29544665c53100453de6cf151cf481d83c13b7be860def. Apr 21 09:59:08.900850 containerd[1482]: time="2026-04-21T09:59:08.900733403Z" level=info msg="StartContainer for \"eaaf552bbe62b0e40f29544665c53100453de6cf151cf481d83c13b7be860def\" returns successfully" Apr 21 09:59:08.950026 systemd-networkd[1366]: cali49d3852e823: Gained IPv6LL Apr 21 09:59:09.566831 kubelet[2514]: I0421 09:59:09.566702 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:09.910285 systemd-networkd[1366]: califa3ebdf6e83: Gained IPv6LL Apr 21 09:59:10.189331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245069794.mount: Deactivated successfully. Apr 21 09:59:10.569867 kubelet[2514]: I0421 09:59:10.569144 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:10.751157 containerd[1482]: time="2026-04-21T09:59:10.751106482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.752799 containerd[1482]: time="2026-04-21T09:59:10.752731748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 21 09:59:10.754831 containerd[1482]: time="2026-04-21T09:59:10.753743414Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.757201 containerd[1482]: time="2026-04-21T09:59:10.756662099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.759837 containerd[1482]: time="2026-04-21T09:59:10.758370107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.920500174s" Apr 21 09:59:10.767226 containerd[1482]: time="2026-04-21T09:59:10.758407717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 21 09:59:10.769913 containerd[1482]: time="2026-04-21T09:59:10.769203468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 09:59:10.773128 containerd[1482]: time="2026-04-21T09:59:10.772665176Z" level=info msg="CreateContainer within sandbox \"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 09:59:10.796941 containerd[1482]: time="2026-04-21T09:59:10.796875765Z" level=info msg="CreateContainer within sandbox \"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d40ce737bcdac9ccbb57e9d2490e708695fe61d8ecc705398d8bc31d5cce7290\"" Apr 21 09:59:10.800001 containerd[1482]: time="2026-04-21T09:59:10.798352472Z" level=info msg="StartContainer for \"d40ce737bcdac9ccbb57e9d2490e708695fe61d8ecc705398d8bc31d5cce7290\"" Apr 21 09:59:10.853764 systemd[1]: Started sshd@7-178.104.221.144:22-50.85.169.122:51884.service - OpenSSH per-connection server daemon (50.85.169.122:51884). 
Apr 21 09:59:10.939006 systemd[1]: Started cri-containerd-d40ce737bcdac9ccbb57e9d2490e708695fe61d8ecc705398d8bc31d5cce7290.scope - libcontainer container d40ce737bcdac9ccbb57e9d2490e708695fe61d8ecc705398d8bc31d5cce7290. Apr 21 09:59:11.004988 sshd[5203]: Accepted publickey for core from 50.85.169.122 port 51884 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:11.009730 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:11.017268 systemd-logind[1452]: New session 8 of user core. Apr 21 09:59:11.022073 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 09:59:11.027933 containerd[1482]: time="2026-04-21T09:59:11.027515551Z" level=info msg="StartContainer for \"d40ce737bcdac9ccbb57e9d2490e708695fe61d8ecc705398d8bc31d5cce7290\" returns successfully" Apr 21 09:59:11.293036 sshd[5203]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:11.301126 systemd[1]: sshd@7-178.104.221.144:22-50.85.169.122:51884.service: Deactivated successfully. Apr 21 09:59:11.304453 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 09:59:11.307389 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Apr 21 09:59:11.310655 systemd-logind[1452]: Removed session 8. 
Apr 21 09:59:11.596784 kubelet[2514]: I0421 09:59:11.596017 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-b79cb9f78-7tssf" podStartSLOduration=35.596000532 podStartE2EDuration="35.596000532s" podCreationTimestamp="2026-04-21 09:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:09.596324355 +0000 UTC m=+54.470074236" watchObservedRunningTime="2026-04-21 09:59:11.596000532 +0000 UTC m=+56.469750413" Apr 21 09:59:13.419851 containerd[1482]: time="2026-04-21T09:59:13.416724371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:13.421441 containerd[1482]: time="2026-04-21T09:59:13.421406271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 21 09:59:13.422927 containerd[1482]: time="2026-04-21T09:59:13.422892928Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:13.426794 containerd[1482]: time="2026-04-21T09:59:13.426741759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:13.428239 containerd[1482]: time="2026-04-21T09:59:13.428204570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", 
size \"50587448\" in 2.658691862s" Apr 21 09:59:13.428370 containerd[1482]: time="2026-04-21T09:59:13.428352204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 21 09:59:13.450722 containerd[1482]: time="2026-04-21T09:59:13.450682301Z" level=info msg="CreateContainer within sandbox \"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 09:59:13.464451 containerd[1482]: time="2026-04-21T09:59:13.464176596Z" level=info msg="CreateContainer within sandbox \"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9\"" Apr 21 09:59:13.466018 containerd[1482]: time="2026-04-21T09:59:13.465985526Z" level=info msg="StartContainer for \"8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9\"" Apr 21 09:59:13.504129 systemd[1]: Started cri-containerd-8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9.scope - libcontainer container 8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9. 
Apr 21 09:59:13.570745 containerd[1482]: time="2026-04-21T09:59:13.570691117Z" level=info msg="StartContainer for \"8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9\" returns successfully" Apr 21 09:59:13.615846 kubelet[2514]: I0421 09:59:13.615748 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cfd49cd7d-twdnt" podStartSLOduration=30.972801413 podStartE2EDuration="36.615729916s" podCreationTimestamp="2026-04-21 09:58:37 +0000 UTC" firstStartedPulling="2026-04-21 09:59:07.788452707 +0000 UTC m=+52.662202588" lastFinishedPulling="2026-04-21 09:59:13.43138121 +0000 UTC m=+58.305131091" observedRunningTime="2026-04-21 09:59:13.611802547 +0000 UTC m=+58.485552468" watchObservedRunningTime="2026-04-21 09:59:13.615729916 +0000 UTC m=+58.489479797" Apr 21 09:59:13.618203 kubelet[2514]: I0421 09:59:13.616052 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-hfxp8" podStartSLOduration=32.612823535 podStartE2EDuration="37.616043707s" podCreationTimestamp="2026-04-21 09:58:36 +0000 UTC" firstStartedPulling="2026-04-21 09:59:05.765262867 +0000 UTC m=+50.639012748" lastFinishedPulling="2026-04-21 09:59:10.768483039 +0000 UTC m=+55.642232920" observedRunningTime="2026-04-21 09:59:11.59827722 +0000 UTC m=+56.472027101" watchObservedRunningTime="2026-04-21 09:59:13.616043707 +0000 UTC m=+58.489793548" Apr 21 09:59:14.625866 systemd[1]: run-containerd-runc-k8s.io-8bf77ee093b841f4133d213b70b868cdc6aaa774040a1782682cabbc2a2626e9-runc.dLwVVd.mount: Deactivated successfully. Apr 21 09:59:15.241668 containerd[1482]: time="2026-04-21T09:59:15.241353810Z" level=info msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.289 [WARNING][5411] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747", Pod:"calico-apiserver-b79cb9f78-7tssf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califa3ebdf6e83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.290 [INFO][5411] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.290 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" iface="eth0" netns="" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.290 [INFO][5411] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.290 [INFO][5411] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.327 [INFO][5418] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.327 [INFO][5418] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.327 [INFO][5418] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.340 [WARNING][5418] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.340 [INFO][5418] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.342 [INFO][5418] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.347883 containerd[1482]: 2026-04-21 09:59:15.345 [INFO][5411] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.348940 containerd[1482]: time="2026-04-21T09:59:15.347987994Z" level=info msg="TearDown network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" successfully" Apr 21 09:59:15.348940 containerd[1482]: time="2026-04-21T09:59:15.348023561Z" level=info msg="StopPodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" returns successfully" Apr 21 09:59:15.348940 containerd[1482]: time="2026-04-21T09:59:15.348800401Z" level=info msg="RemovePodSandbox for \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" Apr 21 09:59:15.350982 containerd[1482]: time="2026-04-21T09:59:15.350936082Z" level=info msg="Forcibly stopping sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\"" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.395 [WARNING][5432] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"6c6d8cb6-7cee-40f4-a210-b9a1efc80ea9", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"2b01612c1a7db10b551de185e9572d9dbb3b74d5137632bbf086b30b62019747", Pod:"calico-apiserver-b79cb9f78-7tssf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califa3ebdf6e83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.396 [INFO][5432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.396 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" iface="eth0" netns="" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.396 [INFO][5432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.396 [INFO][5432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.422 [INFO][5439] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.422 [INFO][5439] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.422 [INFO][5439] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.434 [WARNING][5439] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.434 [INFO][5439] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" HandleID="k8s-pod-network.b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--7tssf-eth0" Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.437 [INFO][5439] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.441892 containerd[1482]: 2026-04-21 09:59:15.439 [INFO][5432] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678" Apr 21 09:59:15.441892 containerd[1482]: time="2026-04-21T09:59:15.441476588Z" level=info msg="TearDown network for sandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" successfully" Apr 21 09:59:15.449078 containerd[1482]: time="2026-04-21T09:59:15.449028305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:15.449209 containerd[1482]: time="2026-04-21T09:59:15.449126245Z" level=info msg="RemovePodSandbox \"b118c496459eb514921cd9a74eb129d901e646ba194b375b30ab9fd23c588678\" returns successfully" Apr 21 09:59:15.449770 containerd[1482]: time="2026-04-21T09:59:15.449736050Z" level=info msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.494 [WARNING][5454] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.495 [INFO][5454] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.495 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" iface="eth0" netns="" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.495 [INFO][5454] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.495 [INFO][5454] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.530 [INFO][5463] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.531 [INFO][5463] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.531 [INFO][5463] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.543 [WARNING][5463] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.543 [INFO][5463] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.546 [INFO][5463] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.550462 containerd[1482]: 2026-04-21 09:59:15.548 [INFO][5454] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.550462 containerd[1482]: time="2026-04-21T09:59:15.550014044Z" level=info msg="TearDown network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" successfully" Apr 21 09:59:15.550462 containerd[1482]: time="2026-04-21T09:59:15.550043570Z" level=info msg="StopPodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" returns successfully" Apr 21 09:59:15.551007 containerd[1482]: time="2026-04-21T09:59:15.550567998Z" level=info msg="RemovePodSandbox for \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" Apr 21 09:59:15.551007 containerd[1482]: time="2026-04-21T09:59:15.550610367Z" level=info msg="Forcibly stopping sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\"" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.596 [WARNING][5478] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" WorkloadEndpoint="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.597 [INFO][5478] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.597 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" iface="eth0" netns="" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.597 [INFO][5478] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.597 [INFO][5478] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.627 [INFO][5485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.628 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.628 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.638 [WARNING][5485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.638 [INFO][5485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" HandleID="k8s-pod-network.f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Workload="ci--4081--3--7--d--6a70a4c656-k8s-whisker--98d5c77f5--775kb-eth0" Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.641 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.647325 containerd[1482]: 2026-04-21 09:59:15.644 [INFO][5478] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5" Apr 21 09:59:15.647959 containerd[1482]: time="2026-04-21T09:59:15.647358713Z" level=info msg="TearDown network for sandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" successfully" Apr 21 09:59:15.655944 containerd[1482]: time="2026-04-21T09:59:15.655864706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:15.655944 containerd[1482]: time="2026-04-21T09:59:15.655943682Z" level=info msg="RemovePodSandbox \"f53752a87b59767cedc982a405419339c8d99a7a047448cafe949623a67b2fa5\" returns successfully" Apr 21 09:59:15.657188 containerd[1482]: time="2026-04-21T09:59:15.656678754Z" level=info msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.697 [WARNING][5499] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0", GenerateName:"calico-kube-controllers-cfd49cd7d-", Namespace:"calico-system", SelfLink:"", UID:"076dd66b-5522-4ecd-a955-84e896c699d3", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfd49cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358", Pod:"calico-kube-controllers-cfd49cd7d-twdnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.199/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49d3852e823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.697 [INFO][5499] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.697 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" iface="eth0" netns="" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.697 [INFO][5499] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.697 [INFO][5499] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.720 [INFO][5507] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.721 [INFO][5507] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.721 [INFO][5507] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.733 [WARNING][5507] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.733 [INFO][5507] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.735 [INFO][5507] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.739791 containerd[1482]: 2026-04-21 09:59:15.737 [INFO][5499] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.740399 containerd[1482]: time="2026-04-21T09:59:15.740357245Z" level=info msg="TearDown network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" successfully" Apr 21 09:59:15.740474 containerd[1482]: time="2026-04-21T09:59:15.740458866Z" level=info msg="StopPodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" returns successfully" Apr 21 09:59:15.741144 containerd[1482]: time="2026-04-21T09:59:15.741117842Z" level=info msg="RemovePodSandbox for \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" Apr 21 09:59:15.741476 containerd[1482]: time="2026-04-21T09:59:15.741299479Z" level=info msg="Forcibly stopping sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\"" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.782 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0", GenerateName:"calico-kube-controllers-cfd49cd7d-", Namespace:"calico-system", SelfLink:"", UID:"076dd66b-5522-4ecd-a955-84e896c699d3", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfd49cd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"19100130d0bc745e092d447c5e0bff3bc54fb00a412dec54e7a25f310624f358", Pod:"calico-kube-controllers-cfd49cd7d-twdnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49d3852e823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.782 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.782 [INFO][5521] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" iface="eth0" netns="" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.782 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.782 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.807 [INFO][5528] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.807 [INFO][5528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.807 [INFO][5528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.820 [WARNING][5528] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.821 [INFO][5528] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" HandleID="k8s-pod-network.d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--kube--controllers--cfd49cd7d--twdnt-eth0" Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.824 [INFO][5528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.828878 containerd[1482]: 2026-04-21 09:59:15.826 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183" Apr 21 09:59:15.828878 containerd[1482]: time="2026-04-21T09:59:15.827981550Z" level=info msg="TearDown network for sandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" successfully" Apr 21 09:59:15.832868 containerd[1482]: time="2026-04-21T09:59:15.832769897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:15.833025 containerd[1482]: time="2026-04-21T09:59:15.832908046Z" level=info msg="RemovePodSandbox \"d50ced9d536f67c6cec3a4c3bf4e9e3fbacbdfe0dc7f764a964d798a44792183\" returns successfully" Apr 21 09:59:15.834059 containerd[1482]: time="2026-04-21T09:59:15.833584185Z" level=info msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.873 [WARNING][5542] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb94aaf5-d4ea-43a3-a742-62679734ea77", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d", Pod:"goldmane-5b85766d88-hfxp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali29d65add248", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.874 [INFO][5542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.874 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" iface="eth0" netns="" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.874 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.874 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.894 [INFO][5549] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.895 [INFO][5549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.895 [INFO][5549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.910 [WARNING][5549] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.910 [INFO][5549] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.912 [INFO][5549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:15.917105 containerd[1482]: 2026-04-21 09:59:15.915 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:15.917978 containerd[1482]: time="2026-04-21T09:59:15.917411867Z" level=info msg="TearDown network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" successfully" Apr 21 09:59:15.917978 containerd[1482]: time="2026-04-21T09:59:15.917734453Z" level=info msg="StopPodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" returns successfully" Apr 21 09:59:15.918897 containerd[1482]: time="2026-04-21T09:59:15.918514814Z" level=info msg="RemovePodSandbox for \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" Apr 21 09:59:15.918897 containerd[1482]: time="2026-04-21T09:59:15.918545501Z" level=info msg="Forcibly stopping sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\"" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.959 [WARNING][5563] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb94aaf5-d4ea-43a3-a742-62679734ea77", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"b4500745d7a09a8dca5eba4a7392fc75689cce9e76ab197aa0f4b7393f18cd2d", Pod:"goldmane-5b85766d88-hfxp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29d65add248", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.960 [INFO][5563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.960 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" iface="eth0" netns="" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.960 [INFO][5563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.960 [INFO][5563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.983 [INFO][5570] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.983 [INFO][5570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.983 [INFO][5570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.996 [WARNING][5570] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.996 [INFO][5570] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" HandleID="k8s-pod-network.b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Workload="ci--4081--3--7--d--6a70a4c656-k8s-goldmane--5b85766d88--hfxp8-eth0" Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:15.998 [INFO][5570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.005155 containerd[1482]: 2026-04-21 09:59:16.002 [INFO][5563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35" Apr 21 09:59:16.005580 containerd[1482]: time="2026-04-21T09:59:16.005201806Z" level=info msg="TearDown network for sandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" successfully" Apr 21 09:59:16.009599 containerd[1482]: time="2026-04-21T09:59:16.009546421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:16.009787 containerd[1482]: time="2026-04-21T09:59:16.009641800Z" level=info msg="RemovePodSandbox \"b1d887e631030b9f9b8679ed34f334f785cab61d894b2adba1b41f0ca8515e35\" returns successfully" Apr 21 09:59:16.010504 containerd[1482]: time="2026-04-21T09:59:16.010205031Z" level=info msg="StopPodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.055 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"43fc62f9-80c2-420c-b7a3-69da00af4f8f", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959", Pod:"calico-apiserver-b79cb9f78-rwr6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali860362aca7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.056 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.056 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" iface="eth0" netns="" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.056 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.056 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.080 [INFO][5592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.080 [INFO][5592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.080 [INFO][5592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.094 [WARNING][5592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.094 [INFO][5592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.097 [INFO][5592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.101679 containerd[1482]: 2026-04-21 09:59:16.099 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.102980 containerd[1482]: time="2026-04-21T09:59:16.101653362Z" level=info msg="TearDown network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" successfully" Apr 21 09:59:16.102980 containerd[1482]: time="2026-04-21T09:59:16.102706650Z" level=info msg="StopPodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" returns successfully" Apr 21 09:59:16.103640 containerd[1482]: time="2026-04-21T09:59:16.103608707Z" level=info msg="RemovePodSandbox for \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" Apr 21 09:59:16.103869 containerd[1482]: time="2026-04-21T09:59:16.103845794Z" level=info msg="Forcibly stopping sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\"" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.159 [WARNING][5606] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0", GenerateName:"calico-apiserver-b79cb9f78-", Namespace:"calico-system", SelfLink:"", UID:"43fc62f9-80c2-420c-b7a3-69da00af4f8f", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79cb9f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"4b91994e66d61eb26371f9c8c9811d630240aa81013f001ade6da763a479f959", Pod:"calico-apiserver-b79cb9f78-rwr6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali860362aca7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.159 [INFO][5606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.159 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" iface="eth0" netns="" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.159 [INFO][5606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.159 [INFO][5606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.183 [INFO][5613] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.184 [INFO][5613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.184 [INFO][5613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.195 [WARNING][5613] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.195 [INFO][5613] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" HandleID="k8s-pod-network.0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Workload="ci--4081--3--7--d--6a70a4c656-k8s-calico--apiserver--b79cb9f78--rwr6l-eth0" Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.197 [INFO][5613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.202638 containerd[1482]: 2026-04-21 09:59:16.200 [INFO][5606] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da" Apr 21 09:59:16.203356 containerd[1482]: time="2026-04-21T09:59:16.202679900Z" level=info msg="TearDown network for sandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" successfully" Apr 21 09:59:16.208208 containerd[1482]: time="2026-04-21T09:59:16.208112010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:16.208378 containerd[1482]: time="2026-04-21T09:59:16.208279923Z" level=info msg="RemovePodSandbox \"0aed76e5ecaf297b2736f954eb1d225b925d99cef754bef7e65c92a81b3d42da\" returns successfully" Apr 21 09:59:16.209167 containerd[1482]: time="2026-04-21T09:59:16.208891603Z" level=info msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.253 [WARNING][5627] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7106ec2c-c759-48f7-8dd6-aae54768ee28", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437", Pod:"coredns-674b8bbfcf-598qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57d6176e888", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.253 [INFO][5627] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.253 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" iface="eth0" netns="" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.253 [INFO][5627] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.253 [INFO][5627] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.277 [INFO][5635] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.278 [INFO][5635] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.278 [INFO][5635] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.288 [WARNING][5635] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.288 [INFO][5635] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.290 [INFO][5635] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.300991 containerd[1482]: 2026-04-21 09:59:16.293 [INFO][5627] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.301973 containerd[1482]: time="2026-04-21T09:59:16.301689520Z" level=info msg="TearDown network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" successfully" Apr 21 09:59:16.301973 containerd[1482]: time="2026-04-21T09:59:16.301719366Z" level=info msg="StopPodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" returns successfully" Apr 21 09:59:16.302463 containerd[1482]: time="2026-04-21T09:59:16.302174936Z" level=info msg="RemovePodSandbox for \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" Apr 21 09:59:16.302463 containerd[1482]: time="2026-04-21T09:59:16.302202021Z" level=info msg="Forcibly stopping sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\"" Apr 21 09:59:16.322562 systemd[1]: Started sshd@8-178.104.221.144:22-50.85.169.122:51890.service - OpenSSH per-connection server daemon (50.85.169.122:51890). Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.352 [WARNING][5649] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7106ec2c-c759-48f7-8dd6-aae54768ee28", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"510f276c9806d88b8a2daca5f5ed11af2e580720665ed121b1cf56a6eb1a1437", Pod:"coredns-674b8bbfcf-598qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57d6176e888", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.395791 containerd[1482]: 
2026-04-21 09:59:16.352 [INFO][5649] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.352 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" iface="eth0" netns="" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.352 [INFO][5649] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.352 [INFO][5649] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.373 [INFO][5660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.373 [INFO][5660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.373 [INFO][5660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.386 [WARNING][5660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.386 [INFO][5660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" HandleID="k8s-pod-network.639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--598qp-eth0" Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.389 [INFO][5660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.395791 containerd[1482]: 2026-04-21 09:59:16.392 [INFO][5649] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f" Apr 21 09:59:16.395791 containerd[1482]: time="2026-04-21T09:59:16.395039706Z" level=info msg="TearDown network for sandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" successfully" Apr 21 09:59:16.399892 containerd[1482]: time="2026-04-21T09:59:16.399851414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:16.399997 containerd[1482]: time="2026-04-21T09:59:16.399924468Z" level=info msg="RemovePodSandbox \"639a90f1031a191cb00e70a220d80a3b4b71dbf44d915b61414a094f3d38903f\" returns successfully" Apr 21 09:59:16.400740 containerd[1482]: time="2026-04-21T09:59:16.400418326Z" level=info msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" Apr 21 09:59:16.457156 sshd[5654]: Accepted publickey for core from 50.85.169.122 port 51890 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:16.459701 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:16.467881 systemd-logind[1452]: New session 9 of user core. Apr 21 09:59:16.471268 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.443 [WARNING][5674] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b02e60bd-7bf5-4d67-90f0-2644154eb8f8", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136", Pod:"coredns-674b8bbfcf-nm2rg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27bd019bd09", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.444 [INFO][5674] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.444 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" iface="eth0" netns="" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.444 [INFO][5674] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.444 [INFO][5674] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.476 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.477 [INFO][5682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.477 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.488 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.488 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.490 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.494439 containerd[1482]: 2026-04-21 09:59:16.492 [INFO][5674] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.494941 containerd[1482]: time="2026-04-21T09:59:16.494509497Z" level=info msg="TearDown network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" successfully" Apr 21 09:59:16.494941 containerd[1482]: time="2026-04-21T09:59:16.494536503Z" level=info msg="StopPodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" returns successfully" Apr 21 09:59:16.495686 containerd[1482]: time="2026-04-21T09:59:16.495331299Z" level=info msg="RemovePodSandbox for \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" Apr 21 09:59:16.495686 containerd[1482]: time="2026-04-21T09:59:16.495362265Z" level=info msg="Forcibly stopping sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\"" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.535 [WARNING][5697] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b02e60bd-7bf5-4d67-90f0-2644154eb8f8", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-d-6a70a4c656", ContainerID:"5180b11b23b9c87ab89cb693347b64082e48a82a37aeb881846fc64df5c48136", Pod:"coredns-674b8bbfcf-nm2rg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27bd019bd09", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:16.597838 containerd[1482]: 
2026-04-21 09:59:16.536 [INFO][5697] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.536 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" iface="eth0" netns="" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.536 [INFO][5697] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.536 [INFO][5697] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.576 [INFO][5704] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.576 [INFO][5704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.576 [INFO][5704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.591 [WARNING][5704] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.591 [INFO][5704] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" HandleID="k8s-pod-network.884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Workload="ci--4081--3--7--d--6a70a4c656-k8s-coredns--674b8bbfcf--nm2rg-eth0" Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.593 [INFO][5704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:16.597838 containerd[1482]: 2026-04-21 09:59:16.595 [INFO][5697] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8" Apr 21 09:59:16.599043 containerd[1482]: time="2026-04-21T09:59:16.599003398Z" level=info msg="TearDown network for sandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" successfully" Apr 21 09:59:16.603249 containerd[1482]: time="2026-04-21T09:59:16.603176300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 09:59:16.603324 containerd[1482]: time="2026-04-21T09:59:16.603276920Z" level=info msg="RemovePodSandbox \"884cd7957d9fda0434bd864702ea9779053a5698d779236dd1d3087362524ca8\" returns successfully" Apr 21 09:59:16.672484 sshd[5654]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:16.679564 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. 
Apr 21 09:59:16.680015 systemd[1]: sshd@8-178.104.221.144:22-50.85.169.122:51890.service: Deactivated successfully. Apr 21 09:59:16.684306 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 09:59:16.687317 systemd-logind[1452]: Removed session 9. Apr 21 09:59:21.717220 systemd[1]: Started sshd@9-178.104.221.144:22-50.85.169.122:43658.service - OpenSSH per-connection server daemon (50.85.169.122:43658). Apr 21 09:59:21.838208 sshd[5727]: Accepted publickey for core from 50.85.169.122 port 43658 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:21.840772 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:21.848236 systemd-logind[1452]: New session 10 of user core. Apr 21 09:59:21.854400 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 09:59:22.035356 sshd[5727]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:22.043586 systemd[1]: sshd@9-178.104.221.144:22-50.85.169.122:43658.service: Deactivated successfully. Apr 21 09:59:22.049638 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 09:59:22.052351 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Apr 21 09:59:22.053937 systemd-logind[1452]: Removed session 10. Apr 21 09:59:26.500168 kubelet[2514]: I0421 09:59:26.500081 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:27.069214 systemd[1]: Started sshd@10-178.104.221.144:22-50.85.169.122:43662.service - OpenSSH per-connection server daemon (50.85.169.122:43662). Apr 21 09:59:27.202873 sshd[5765]: Accepted publickey for core from 50.85.169.122 port 43662 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:27.205375 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:27.211246 systemd-logind[1452]: New session 11 of user core. 
Apr 21 09:59:27.220251 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 09:59:27.419076 sshd[5765]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:27.425173 systemd[1]: sshd@10-178.104.221.144:22-50.85.169.122:43662.service: Deactivated successfully. Apr 21 09:59:27.429215 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 09:59:27.430877 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Apr 21 09:59:27.448686 systemd[1]: Started sshd@11-178.104.221.144:22-50.85.169.122:43678.service - OpenSSH per-connection server daemon (50.85.169.122:43678). Apr 21 09:59:27.450545 systemd-logind[1452]: Removed session 11. Apr 21 09:59:27.569051 sshd[5779]: Accepted publickey for core from 50.85.169.122 port 43678 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:27.570954 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:27.578718 systemd-logind[1452]: New session 12 of user core. Apr 21 09:59:27.585053 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 09:59:27.804159 sshd[5779]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:27.811732 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Apr 21 09:59:27.812047 systemd[1]: sshd@11-178.104.221.144:22-50.85.169.122:43678.service: Deactivated successfully. Apr 21 09:59:27.815646 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 09:59:27.819384 systemd-logind[1452]: Removed session 12. Apr 21 09:59:27.838722 systemd[1]: Started sshd@12-178.104.221.144:22-50.85.169.122:43682.service - OpenSSH per-connection server daemon (50.85.169.122:43682). 
Apr 21 09:59:27.963340 sshd[5790]: Accepted publickey for core from 50.85.169.122 port 43682 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:27.966667 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:27.971711 systemd-logind[1452]: New session 13 of user core. Apr 21 09:59:27.975000 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 09:59:28.155330 sshd[5790]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:28.161313 systemd[1]: sshd@12-178.104.221.144:22-50.85.169.122:43682.service: Deactivated successfully. Apr 21 09:59:28.165189 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 09:59:28.165967 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Apr 21 09:59:28.167448 systemd-logind[1452]: Removed session 13. Apr 21 09:59:33.187155 systemd[1]: Started sshd@13-178.104.221.144:22-50.85.169.122:37888.service - OpenSSH per-connection server daemon (50.85.169.122:37888). Apr 21 09:59:33.305859 sshd[5808]: Accepted publickey for core from 50.85.169.122 port 37888 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:33.307023 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:33.314022 systemd-logind[1452]: New session 14 of user core. Apr 21 09:59:33.319064 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 09:59:33.488259 sshd[5808]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:33.495528 systemd[1]: sshd@13-178.104.221.144:22-50.85.169.122:37888.service: Deactivated successfully. Apr 21 09:59:33.498991 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 09:59:33.499909 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Apr 21 09:59:33.501438 systemd-logind[1452]: Removed session 14. 
Apr 21 09:59:33.524369 systemd[1]: Started sshd@14-178.104.221.144:22-50.85.169.122:37896.service - OpenSSH per-connection server daemon (50.85.169.122:37896). Apr 21 09:59:33.647382 sshd[5821]: Accepted publickey for core from 50.85.169.122 port 37896 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:33.649388 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:33.654744 systemd-logind[1452]: New session 15 of user core. Apr 21 09:59:33.657032 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 09:59:34.009075 sshd[5821]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:34.015601 systemd[1]: sshd@14-178.104.221.144:22-50.85.169.122:37896.service: Deactivated successfully. Apr 21 09:59:34.019420 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 09:59:34.022015 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Apr 21 09:59:34.023401 systemd-logind[1452]: Removed session 15. Apr 21 09:59:34.046595 systemd[1]: Started sshd@15-178.104.221.144:22-50.85.169.122:37902.service - OpenSSH per-connection server daemon (50.85.169.122:37902). Apr 21 09:59:34.170980 sshd[5852]: Accepted publickey for core from 50.85.169.122 port 37902 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:34.173239 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:34.178586 systemd-logind[1452]: New session 16 of user core. Apr 21 09:59:34.186144 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 09:59:35.002044 sshd[5852]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:35.011466 systemd[1]: sshd@15-178.104.221.144:22-50.85.169.122:37902.service: Deactivated successfully. Apr 21 09:59:35.017559 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 09:59:35.019902 systemd-logind[1452]: Session 16 logged out. 
Waiting for processes to exit. Apr 21 09:59:35.033675 systemd[1]: Started sshd@16-178.104.221.144:22-50.85.169.122:37914.service - OpenSSH per-connection server daemon (50.85.169.122:37914). Apr 21 09:59:35.035589 systemd-logind[1452]: Removed session 16. Apr 21 09:59:35.161901 sshd[5882]: Accepted publickey for core from 50.85.169.122 port 37914 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:35.164021 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:35.169590 systemd-logind[1452]: New session 17 of user core. Apr 21 09:59:35.176170 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 09:59:35.505046 sshd[5882]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:35.512320 systemd[1]: sshd@16-178.104.221.144:22-50.85.169.122:37914.service: Deactivated successfully. Apr 21 09:59:35.515603 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 09:59:35.518782 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Apr 21 09:59:35.537090 systemd[1]: Started sshd@17-178.104.221.144:22-50.85.169.122:37924.service - OpenSSH per-connection server daemon (50.85.169.122:37924). Apr 21 09:59:35.538587 systemd-logind[1452]: Removed session 17. Apr 21 09:59:35.664221 sshd[5904]: Accepted publickey for core from 50.85.169.122 port 37924 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:35.666490 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:35.672568 systemd-logind[1452]: New session 18 of user core. Apr 21 09:59:35.678025 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 09:59:35.847408 sshd[5904]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:35.853764 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. 
Apr 21 09:59:35.854069 systemd[1]: sshd@17-178.104.221.144:22-50.85.169.122:37924.service: Deactivated successfully. Apr 21 09:59:35.856549 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 09:59:35.858033 systemd-logind[1452]: Removed session 18. Apr 21 09:59:40.882990 systemd[1]: Started sshd@18-178.104.221.144:22-50.85.169.122:47394.service - OpenSSH per-connection server daemon (50.85.169.122:47394). Apr 21 09:59:40.994868 sshd[5919]: Accepted publickey for core from 50.85.169.122 port 47394 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:40.996063 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:41.005135 systemd-logind[1452]: New session 19 of user core. Apr 21 09:59:41.009107 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 09:59:41.185033 sshd[5919]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:41.191912 systemd[1]: sshd@18-178.104.221.144:22-50.85.169.122:47394.service: Deactivated successfully. Apr 21 09:59:41.196199 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 09:59:41.198208 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Apr 21 09:59:41.199257 systemd-logind[1452]: Removed session 19. Apr 21 09:59:46.223236 systemd[1]: Started sshd@19-178.104.221.144:22-50.85.169.122:47406.service - OpenSSH per-connection server daemon (50.85.169.122:47406). Apr 21 09:59:46.351123 sshd[5973]: Accepted publickey for core from 50.85.169.122 port 47406 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:46.353812 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:46.360701 systemd-logind[1452]: New session 20 of user core. Apr 21 09:59:46.367139 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 21 09:59:46.538693 sshd[5973]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:46.546067 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Apr 21 09:59:46.546747 systemd[1]: sshd@19-178.104.221.144:22-50.85.169.122:47406.service: Deactivated successfully. Apr 21 09:59:46.548668 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 09:59:46.550795 systemd-logind[1452]: Removed session 20. Apr 21 09:59:49.526928 kubelet[2514]: I0421 09:59:49.526428 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:00:04.032669 systemd[1]: run-containerd-runc-k8s.io-7fa507c115c5f570627d21dc13cf478197540d3732a71185815b79b872d70e75-runc.jiNxYV.mount: Deactivated successfully. Apr 21 10:00:17.818855 kubelet[2514]: E0421 10:00:17.818790 2514 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55452->10.0.0.2:2379: read: connection timed out" Apr 21 10:00:17.824776 systemd[1]: cri-containerd-a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9.scope: Deactivated successfully. Apr 21 10:00:17.825653 systemd[1]: cri-containerd-a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9.scope: Consumed 3.097s CPU time, 16.3M memory peak, 0B memory swap peak. 
Apr 21 10:00:17.851541 containerd[1482]: time="2026-04-21T10:00:17.851207973Z" level=info msg="shim disconnected" id=a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9 namespace=k8s.io
Apr 21 10:00:17.851541 containerd[1482]: time="2026-04-21T10:00:17.851296330Z" level=warning msg="cleaning up after shim disconnected" id=a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9 namespace=k8s.io
Apr 21 10:00:17.851541 containerd[1482]: time="2026-04-21T10:00:17.851305650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:00:17.852920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9-rootfs.mount: Deactivated successfully.
Apr 21 10:00:17.980376 systemd[1]: cri-containerd-8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357.scope: Deactivated successfully.
Apr 21 10:00:17.981096 systemd[1]: cri-containerd-8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357.scope: Consumed 19.253s CPU time.
Apr 21 10:00:18.002631 containerd[1482]: time="2026-04-21T10:00:18.002569517Z" level=info msg="shim disconnected" id=8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357 namespace=k8s.io
Apr 21 10:00:18.002785 containerd[1482]: time="2026-04-21T10:00:18.002623635Z" level=warning msg="cleaning up after shim disconnected" id=8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357 namespace=k8s.io
Apr 21 10:00:18.002785 containerd[1482]: time="2026-04-21T10:00:18.002679554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:00:18.003348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357-rootfs.mount: Deactivated successfully.
Apr 21 10:00:18.791088 kubelet[2514]: I0421 10:00:18.791042    2514 scope.go:117] "RemoveContainer" containerID="a19c6744f3ab42807719745a23f89e405293d30880d18161d5b495d678175fe9"
Apr 21 10:00:18.791665 kubelet[2514]: I0421 10:00:18.791410    2514 scope.go:117] "RemoveContainer" containerID="8337d669e0737cbd41937f55c74fe1c4eef4dcf29270ff9e9e1bc16b5af05357"
Apr 21 10:00:18.794109 containerd[1482]: time="2026-04-21T10:00:18.794068339Z" level=info msg="CreateContainer within sandbox \"d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:00:18.794871 containerd[1482]: time="2026-04-21T10:00:18.794628083Z" level=info msg="CreateContainer within sandbox \"10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:00:18.812101 containerd[1482]: time="2026-04-21T10:00:18.812059583Z" level=info msg="CreateContainer within sandbox \"10d720dd9bf5bd7561fdcad1e947a9143cd7eef9a47cf2e0823d39847e98f869\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"84aa21675b5c3e84bbdeeadb04500bc7cd7beb3dc61d70d995890cd31fc2e386\""
Apr 21 10:00:18.815040 containerd[1482]: time="2026-04-21T10:00:18.815001818Z" level=info msg="StartContainer for \"84aa21675b5c3e84bbdeeadb04500bc7cd7beb3dc61d70d995890cd31fc2e386\""
Apr 21 10:00:18.819687 containerd[1482]: time="2026-04-21T10:00:18.819640365Z" level=info msg="CreateContainer within sandbox \"d4a2e55da05c6a31ccd30c8c90d83825802ddd0f3177c76a9e0e300941a069ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"21b67056194d2816f88bedaecb1d1a19982ffad0342aea59ef1538e06305db63\""
Apr 21 10:00:18.820346 containerd[1482]: time="2026-04-21T10:00:18.820313105Z" level=info msg="StartContainer for \"21b67056194d2816f88bedaecb1d1a19982ffad0342aea59ef1538e06305db63\""
Apr 21 10:00:18.845993 systemd[1]: Started cri-containerd-84aa21675b5c3e84bbdeeadb04500bc7cd7beb3dc61d70d995890cd31fc2e386.scope - libcontainer container 84aa21675b5c3e84bbdeeadb04500bc7cd7beb3dc61d70d995890cd31fc2e386.
Apr 21 10:00:18.874076 systemd[1]: Started cri-containerd-21b67056194d2816f88bedaecb1d1a19982ffad0342aea59ef1538e06305db63.scope - libcontainer container 21b67056194d2816f88bedaecb1d1a19982ffad0342aea59ef1538e06305db63.
Apr 21 10:00:18.903438 systemd[1]: cri-containerd-26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d.scope: Deactivated successfully.
Apr 21 10:00:18.903694 systemd[1]: cri-containerd-26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d.scope: Consumed 3.463s CPU time, 20.2M memory peak, 0B memory swap peak.
Apr 21 10:00:18.930623 containerd[1482]: time="2026-04-21T10:00:18.930278347Z" level=info msg="StartContainer for \"84aa21675b5c3e84bbdeeadb04500bc7cd7beb3dc61d70d995890cd31fc2e386\" returns successfully"
Apr 21 10:00:18.964847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d-rootfs.mount: Deactivated successfully.
Apr 21 10:00:18.965527 containerd[1482]: time="2026-04-21T10:00:18.965285381Z" level=info msg="shim disconnected" id=26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d namespace=k8s.io
Apr 21 10:00:18.965527 containerd[1482]: time="2026-04-21T10:00:18.965455416Z" level=warning msg="cleaning up after shim disconnected" id=26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d namespace=k8s.io
Apr 21 10:00:18.965527 containerd[1482]: time="2026-04-21T10:00:18.965464656Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:00:18.971253 containerd[1482]: time="2026-04-21T10:00:18.971173332Z" level=info msg="StartContainer for \"21b67056194d2816f88bedaecb1d1a19982ffad0342aea59ef1538e06305db63\" returns successfully"
Apr 21 10:00:19.490250 kubelet[2514]: E0421 10:00:19.480149    2514 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55230->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-d-6a70a4c656.18a856e919286b22  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-d-6a70a4c656,UID:9a7eec9103a0a9bf8ff79a225feff015,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-d-6a70a4c656,},FirstTimestamp:2026-04-21 10:00:12.147108642 +0000 UTC m=+117.020858523,LastTimestamp:2026-04-21 10:00:12.147108642 +0000 UTC m=+117.020858523,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-d-6a70a4c656,}"
Apr 21 10:00:19.793933 kubelet[2514]: I0421 10:00:19.793291    2514 scope.go:117] "RemoveContainer" containerID="26101973a8719898552e645b4b9870f1a928ca90414183785b76b89090639e3d"
Apr 21 10:00:19.797807 containerd[1482]: time="2026-04-21T10:00:19.797773742Z" level=info msg="CreateContainer within sandbox \"a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:00:19.817769 containerd[1482]: time="2026-04-21T10:00:19.817630577Z" level=info msg="CreateContainer within sandbox \"a29e93be4b815d45c37047ba51f6256913683506179a73534fda5d9a377cd22d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e0be901425a8e79c041507f602de6d458ddde57e4827c90527b64340a0354127\""
Apr 21 10:00:19.819516 containerd[1482]: time="2026-04-21T10:00:19.818276879Z" level=info msg="StartContainer for \"e0be901425a8e79c041507f602de6d458ddde57e4827c90527b64340a0354127\""
Apr 21 10:00:19.853211 systemd[1]: Started cri-containerd-e0be901425a8e79c041507f602de6d458ddde57e4827c90527b64340a0354127.scope - libcontainer container e0be901425a8e79c041507f602de6d458ddde57e4827c90527b64340a0354127.
Apr 21 10:00:19.910131 containerd[1482]: time="2026-04-21T10:00:19.910067369Z" level=info msg="StartContainer for \"e0be901425a8e79c041507f602de6d458ddde57e4827c90527b64340a0354127\" returns successfully"