Apr 17 23:27:37.899125 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 17 23:27:37.899155 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:27:37.899167 kernel: KASLR enabled
Apr 17 23:27:37.899175 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 17 23:27:37.899183 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 17 23:27:37.899190 kernel: random: crng init done
Apr 17 23:27:37.899200 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:27:37.899208 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 17 23:27:37.899216 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:27:37.899226 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899234 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899242 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899250 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899259 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899269 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899280 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899288 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899297 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:37.899306 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:27:37.899314 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 17 23:27:37.899323 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:27:37.899331 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:27:37.899339 kernel: NUMA: NODE_DATA [mem 0x139671800-0x139676fff]
Apr 17 23:27:37.899348 kernel: Zone ranges:
Apr 17 23:27:37.899357 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:27:37.899367 kernel: DMA32 empty
Apr 17 23:27:37.899376 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 17 23:27:37.899385 kernel: Movable zone start for each node
Apr 17 23:27:37.899393 kernel: Early memory node ranges
Apr 17 23:27:37.899401 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 17 23:27:37.899410 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 17 23:27:37.899418 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 17 23:27:37.899426 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 17 23:27:37.899434 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 17 23:27:37.899442 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 17 23:27:37.900755 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 17 23:27:37.900772 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:27:37.900787 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 17 23:27:37.900796 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:27:37.900805 kernel: psci: PSCIv1.1 detected in firmware.
Apr 17 23:27:37.900818 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:27:37.900827 kernel: psci: Trusted OS migration not required
Apr 17 23:27:37.900837 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:27:37.900848 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 17 23:27:37.900858 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:27:37.900867 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:27:37.900876 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:27:37.900885 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:27:37.900894 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:27:37.900917 kernel: CPU features: detected: Hardware dirty bit management
Apr 17 23:27:37.900926 kernel: CPU features: detected: Spectre-v4
Apr 17 23:27:37.900933 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:27:37.900940 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 17 23:27:37.900950 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 17 23:27:37.900957 kernel: CPU features: detected: ARM erratum 1418040
Apr 17 23:27:37.900964 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 17 23:27:37.900972 kernel: alternatives: applying boot alternatives
Apr 17 23:27:37.900981 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:27:37.901045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:27:37.901053 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:27:37.901060 kernel: Fallback order for Node 0: 0
Apr 17 23:27:37.901067 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 17 23:27:37.901074 kernel: Policy zone: Normal
Apr 17 23:27:37.901081 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:27:37.901090 kernel: software IO TLB: area num 2.
Apr 17 23:27:37.901098 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 17 23:27:37.901105 kernel: Memory: 3882824K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213176K reserved, 0K cma-reserved)
Apr 17 23:27:37.901112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:27:37.901119 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:27:37.901127 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:27:37.901134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:27:37.901142 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:27:37.901149 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:27:37.901156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:27:37.901163 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:27:37.901170 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:27:37.901178 kernel: GICv3: 256 SPIs implemented
Apr 17 23:27:37.901185 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:27:37.901192 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:27:37.901199 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 17 23:27:37.901206 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 17 23:27:37.901213 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 17 23:27:37.901220 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:27:37.901227 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:27:37.901234 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 17 23:27:37.901241 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 17 23:27:37.901248 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:27:37.901257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:27:37.901264 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 17 23:27:37.901272 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 17 23:27:37.901279 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 17 23:27:37.901286 kernel: Console: colour dummy device 80x25
Apr 17 23:27:37.901294 kernel: ACPI: Core revision 20230628
Apr 17 23:27:37.901301 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 17 23:27:37.901308 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:27:37.901315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:27:37.901323 kernel: landlock: Up and running.
Apr 17 23:27:37.901332 kernel: SELinux: Initializing.
Apr 17 23:27:37.901339 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:27:37.901347 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:27:37.901355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:27:37.901362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:27:37.901369 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:27:37.901377 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:27:37.901384 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 17 23:27:37.901391 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 17 23:27:37.901400 kernel: Remapping and enabling EFI services.
Apr 17 23:27:37.901408 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:27:37.901416 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:27:37.901424 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 17 23:27:37.901431 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 17 23:27:37.901438 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:27:37.901446 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 17 23:27:37.903508 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:27:37.903531 kernel: SMP: Total of 2 processors activated.
Apr 17 23:27:37.903540 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:27:37.903554 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 17 23:27:37.903562 kernel: CPU features: detected: Common not Private translations
Apr 17 23:27:37.903575 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:27:37.903584 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 17 23:27:37.903591 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 17 23:27:37.903599 kernel: CPU features: detected: LSE atomic instructions
Apr 17 23:27:37.903606 kernel: CPU features: detected: Privileged Access Never
Apr 17 23:27:37.903614 kernel: CPU features: detected: RAS Extension Support
Apr 17 23:27:37.903624 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 17 23:27:37.903632 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:27:37.903640 kernel: alternatives: applying system-wide alternatives
Apr 17 23:27:37.903647 kernel: devtmpfs: initialized
Apr 17 23:27:37.903656 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:27:37.903663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:27:37.903671 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:27:37.903678 kernel: SMBIOS 3.0.0 present.
Apr 17 23:27:37.903687 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 17 23:27:37.903695 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:27:37.903702 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:27:37.903710 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:27:37.903718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:27:37.903726 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:27:37.903734 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Apr 17 23:27:37.903742 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:27:37.903749 kernel: cpuidle: using governor menu
Apr 17 23:27:37.903759 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:27:37.903767 kernel: ASID allocator initialised with 32768 entries
Apr 17 23:27:37.903774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:27:37.903782 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:27:37.903790 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 17 23:27:37.903798 kernel: Modules: 0 pages in range for non-PLT usage
Apr 17 23:27:37.903806 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:27:37.903813 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:27:37.903822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:27:37.903832 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:27:37.903839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:27:37.903847 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:27:37.903855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:27:37.903863 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:27:37.903870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:27:37.903878 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:27:37.903886 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:27:37.903894 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:27:37.903919 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:27:37.903928 kernel: ACPI: Interpreter enabled
Apr 17 23:27:37.903936 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:27:37.903943 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:27:37.903951 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 17 23:27:37.903958 kernel: printk: console [ttyAMA0] enabled
Apr 17 23:27:37.903966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:27:37.904148 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:27:37.904233 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:27:37.904304 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:27:37.904371 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 17 23:27:37.904437 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 17 23:27:37.905496 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 17 23:27:37.905517 kernel: PCI host bridge to bus 0000:00
Apr 17 23:27:37.905650 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 17 23:27:37.905725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:27:37.905787 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 17 23:27:37.905852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:27:37.905966 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 17 23:27:37.906052 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 17 23:27:37.906125 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 17 23:27:37.906195 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:27:37.906275 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.906345 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 17 23:27:37.906429 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.907556 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 17 23:27:37.907661 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.907732 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 17 23:27:37.907840 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.907967 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 17 23:27:37.908055 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.908122 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 17 23:27:37.908194 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.908261 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 17 23:27:37.908339 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.908405 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 17 23:27:37.908511 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.908597 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 17 23:27:37.908687 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:37.908768 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 17 23:27:37.908866 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 17 23:27:37.908960 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 17 23:27:37.909043 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:27:37.909113 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 17 23:27:37.909185 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:27:37.909257 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:27:37.909334 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 17 23:27:37.909412 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 17 23:27:37.909741 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 17 23:27:37.910099 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 17 23:27:37.910172 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 17 23:27:37.910265 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 17 23:27:37.910333 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 17 23:27:37.910415 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 17 23:27:37.910543 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 17 23:27:37.910613 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 17 23:27:37.910687 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 17 23:27:37.910755 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 17 23:27:37.910821 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:27:37.910909 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:27:37.910986 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 17 23:27:37.911053 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 17 23:27:37.911129 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:27:37.911217 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 17 23:27:37.911289 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:27:37.911360 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:27:37.911434 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 17 23:27:37.911516 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 17 23:27:37.911587 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 17 23:27:37.911660 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 17 23:27:37.911730 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:27:37.911801 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:27:37.911874 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 17 23:27:37.911963 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 17 23:27:37.912041 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 17 23:27:37.912117 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 17 23:27:37.912188 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:27:37.912260 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:27:37.912331 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 17 23:27:37.912405 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:27:37.912491 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:27:37.912694 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 17 23:27:37.912772 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:27:37.912844 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:27:37.912937 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 17 23:27:37.913015 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:27:37.913089 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:27:37.913162 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 17 23:27:37.913232 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:27:37.913302 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:27:37.913375 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 17 23:27:37.913516 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:37.913622 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 17 23:27:37.913700 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:37.913772 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 17 23:27:37.913847 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:37.913968 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 17 23:27:37.914046 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:37.914117 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 17 23:27:37.914186 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:37.914255 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:37.914321 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:37.914397 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:37.914548 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:37.914624 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:37.914688 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:37.914751 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 17 23:27:37.914814 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:37.914882 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 17 23:27:37.914970 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 17 23:27:37.915037 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 17 23:27:37.915101 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 17 23:27:37.915164 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 17 23:27:37.915229 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 17 23:27:37.915293 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 17 23:27:37.915356 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 17 23:27:37.915420 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 17 23:27:37.915528 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 17 23:27:37.915595 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 17 23:27:37.915658 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 17 23:27:37.915722 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 17 23:27:37.915787 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 17 23:27:37.915851 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 17 23:27:37.915935 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 17 23:27:37.916011 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 17 23:27:37.916082 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 17 23:27:37.916152 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 17 23:27:37.916221 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 17 23:27:37.916294 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 17 23:27:37.916370 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 17 23:27:37.916444 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:27:37.917655 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 17 23:27:37.917735 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 17 23:27:37.917837 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 17 23:27:37.917930 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 17 23:27:37.918012 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:37.918090 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 17 23:27:37.918165 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 17 23:27:37.918234 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 17 23:27:37.918302 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 17 23:27:37.918369 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:37.918445 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:27:37.919293 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 17 23:27:37.919366 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 17 23:27:37.919434 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 17 23:27:37.919600 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 17 23:27:37.919669 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:37.919744 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:27:37.919958 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 17 23:27:37.920031 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 17 23:27:37.920099 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 17 23:27:37.920171 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:37.920248 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 17 23:27:37.920324 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 17 23:27:37.920397 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 17 23:27:37.920486 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 17 23:27:37.920557 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 17 23:27:37.920628 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:37.923623 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 17 23:27:37.923710 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 17 23:27:37.923783 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 17 23:27:37.923859 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 17 23:27:37.924110 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:37.924193 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:37.924275 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 17 23:27:37.924349 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 17 23:27:37.924422 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 17 23:27:37.924516 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 17 23:27:37.924592 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 17 23:27:37.924669 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:37.924738 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:37.924810 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 17 23:27:37.924879 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 17 23:27:37.924970 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:37.925041 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:37.925114 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 17 23:27:37.925183 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 17 23:27:37.925256 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 17 23:27:37.925323 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:37.925395 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 17 23:27:37.926426 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:27:37.926562 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 17 23:27:37.926646 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 17 23:27:37.926708 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 17 23:27:37.926773 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:37.926840 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 17 23:27:37.926899 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 17 23:27:37.927007 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:37.927077 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 17 23:27:37.927139 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 17 23:27:37.927204 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:37.927271 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 17 23:27:37.927331 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 17 23:27:37.927405 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:37.927574 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 17 23:27:37.927644 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 17 23:27:37.927702 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:37.927773 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 17 23:27:37.927832 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:37.927892 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:37.927979 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 17 23:27:37.928043 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:37.928101 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:37.928166 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 17 23:27:37.928225 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:37.928283 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:37.928348 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 17 23:27:37.928408 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 17 23:27:37.928547 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:37.928560 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:27:37.928568 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:27:37.928576 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:27:37.928584 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:27:37.928592 kernel: iommu: Default domain type: Translated
Apr 17 23:27:37.928600 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:27:37.928607 kernel: efivars: Registered efivars operations
Apr 17 23:27:37.928619 kernel: vgaarb: loaded
Apr 17 23:27:37.928627 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:27:37.928634 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:27:37.928642 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:27:37.928650 kernel: pnp: PnP ACPI init
Apr 17 23:27:37.928731 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 17 23:27:37.928743 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:27:37.928751 kernel: NET: Registered PF_INET protocol family
Apr 17 23:27:37.928759 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:27:37.928769 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:27:37.928777 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:27:37.928786 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:27:37.928794
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:27:37.928802 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:27:37.928810 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:27:37.928818 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:27:37.928826 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:27:37.928898 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 17 23:27:37.928926 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:27:37.928934 kernel: kvm [1]: HYP mode not available Apr 17 23:27:37.928942 kernel: Initialise system trusted keyrings Apr 17 23:27:37.928950 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:27:37.928958 kernel: Key type asymmetric registered Apr 17 23:27:37.928966 kernel: Asymmetric key parser 'x509' registered Apr 17 23:27:37.928973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 17 23:27:37.928981 kernel: io scheduler mq-deadline registered Apr 17 23:27:37.928989 kernel: io scheduler kyber registered Apr 17 23:27:37.928999 kernel: io scheduler bfq registered Apr 17 23:27:37.929007 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 17 23:27:37.929082 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 17 23:27:37.929149 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 17 23:27:37.929215 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.929284 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 17 23:27:37.929353 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 17 23:27:37.929417 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.929714 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 17 23:27:37.929791 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 17 23:27:37.929856 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.929975 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 17 23:27:37.930053 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 17 23:27:37.930117 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.930185 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 17 23:27:37.930250 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 17 23:27:37.930315 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.930382 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 17 23:27:37.930615 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 17 23:27:37.930705 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.930774 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 17 23:27:37.930838 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 17 23:27:37.930921 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.931001 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 17 23:27:37.931312 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 17 23:27:37.931390 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:37.931402 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 17 23:27:37.931488 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 17 23:27:37.931557 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 17 23:27:37.931621 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 17 23:27:37.931632 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 17 23:27:37.931645 kernel: ACPI: button: Power Button [PWRB]
Apr 17 23:27:37.931653 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 17 23:27:37.931754 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 17 23:27:37.931832 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 17 23:27:37.931844 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:27:37.931852 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 17 23:27:37.931979 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 17 23:27:37.931994 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 17 23:27:37.932002 kernel: thunder_xcv, ver 1.0
Apr 17 23:27:37.932014 kernel: thunder_bgx, ver 1.0
Apr 17 23:27:37.932022 kernel: nicpf, ver 1.0
Apr 17 23:27:37.932030 kernel: nicvf, ver 1.0
Apr 17 23:27:37.932115 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 17 23:27:37.932179 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:27:37 UTC (1776468457)
Apr 17 23:27:37.932189 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:27:37.932198 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 17 23:27:37.932206 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 17 23:27:37.932217 kernel: watchdog: Hard watchdog permanently disabled
Apr 17 23:27:37.932224 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:27:37.932232 kernel: Segment Routing with IPv6
Apr 17 23:27:37.932240 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:27:37.932247 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:27:37.932256 kernel: Key type dns_resolver registered
Apr 17 23:27:37.932263 kernel: registered taskstats version 1
Apr 17 23:27:37.932271 kernel: Loading compiled-in X.509 certificates
Apr 17 23:27:37.932279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535'
Apr 17 23:27:37.932288 kernel: Key type .fscrypt registered
Apr 17 23:27:37.932295 kernel: Key type fscrypt-provisioning registered
Apr 17 23:27:37.932303 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:27:37.932311 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:27:37.932319 kernel: ima: No architecture policies found
Apr 17 23:27:37.932327 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 17 23:27:37.932334 kernel: clk: Disabling unused clocks
Apr 17 23:27:37.932342 kernel: Freeing unused kernel memory: 39424K
Apr 17 23:27:37.932350 kernel: Run /init as init process
Apr 17 23:27:37.932359 kernel: with arguments:
Apr 17 23:27:37.932367 kernel: /init
Apr 17 23:27:37.932374 kernel: with environment:
Apr 17 23:27:37.932382 kernel: HOME=/
Apr 17 23:27:37.932389 kernel: TERM=linux
Apr 17 23:27:37.932399 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:27:37.932409 systemd[1]: Detected virtualization kvm.
Apr 17 23:27:37.932417 systemd[1]: Detected architecture arm64.
Apr 17 23:27:37.932427 systemd[1]: Running in initrd.
Apr 17 23:27:37.932435 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:27:37.932443 systemd[1]: Hostname set to .
Apr 17 23:27:37.932507 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:27:37.932517 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:27:37.932526 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:27:37.932534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:27:37.932543 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:27:37.932555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:27:37.932565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:27:37.932574 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:27:37.932584 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:27:37.932592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:27:37.932601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:27:37.932609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:27:37.932619 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:27:37.932627 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:27:37.932636 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:27:37.932644 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:27:37.932652 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:27:37.932660 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:27:37.932669 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:27:37.932677 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:27:37.932687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:27:37.932695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:27:37.932703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:27:37.932711 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:27:37.932720 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:27:37.932728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:27:37.932737 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:27:37.932745 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:27:37.932753 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:27:37.932763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:27:37.932771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:37.932780 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:27:37.932811 systemd-journald[237]: Collecting audit messages is disabled.
Apr 17 23:27:37.932834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:27:37.932843 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:27:37.932852 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:27:37.932860 kernel: Bridge firewalling registered
Apr 17 23:27:37.932870 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:27:37.932879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:27:37.932887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:27:37.932896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:37.932933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:27:37.932944 systemd-journald[237]: Journal started
Apr 17 23:27:37.932966 systemd-journald[237]: Runtime Journal (/run/log/journal/df094abc10b94a6bac8389ab4f3e7bc5) is 8.0M, max 76.6M, 68.6M free.
Apr 17 23:27:37.889519 systemd-modules-load[238]: Inserted module 'overlay'
Apr 17 23:27:37.913584 systemd-modules-load[238]: Inserted module 'br_netfilter'
Apr 17 23:27:37.937727 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:27:37.945732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:27:37.949260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:27:37.953651 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:27:37.957664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:27:37.967556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:27:37.971730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:27:37.978666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:27:37.984385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:27:37.991704 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:27:38.013318 dracut-cmdline[275]: dracut-dracut-053
Apr 17 23:27:38.014589 systemd-resolved[269]: Positive Trust Anchors:
Apr 17 23:27:38.014605 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:27:38.014636 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:27:38.020143 systemd-resolved[269]: Defaulting to hostname 'linux'.
Apr 17 23:27:38.023542 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:27:38.021272 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:27:38.022886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:27:38.092503 kernel: SCSI subsystem initialized
Apr 17 23:27:38.096487 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:27:38.104719 kernel: iscsi: registered transport (tcp)
Apr 17 23:27:38.117543 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:27:38.117629 kernel: QLogic iSCSI HBA Driver
Apr 17 23:27:38.158401 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:27:38.163741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:27:38.185476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:27:38.186644 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:27:38.186683 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:27:38.239510 kernel: raid6: neonx8 gen() 15716 MB/s
Apr 17 23:27:38.256502 kernel: raid6: neonx4 gen() 15559 MB/s
Apr 17 23:27:38.273684 kernel: raid6: neonx2 gen() 13117 MB/s
Apr 17 23:27:38.290543 kernel: raid6: neonx1 gen() 10438 MB/s
Apr 17 23:27:38.307560 kernel: raid6: int64x8 gen() 6931 MB/s
Apr 17 23:27:38.324971 kernel: raid6: int64x4 gen() 7319 MB/s
Apr 17 23:27:38.341510 kernel: raid6: int64x2 gen() 6101 MB/s
Apr 17 23:27:38.358885 kernel: raid6: int64x1 gen() 5024 MB/s
Apr 17 23:27:38.358985 kernel: raid6: using algorithm neonx8 gen() 15716 MB/s
Apr 17 23:27:38.375527 kernel: raid6: .... xor() 11918 MB/s, rmw enabled
Apr 17 23:27:38.375619 kernel: raid6: using neon recovery algorithm
Apr 17 23:27:38.380503 kernel: xor: measuring software checksum speed
Apr 17 23:27:38.380571 kernel: 8regs : 19052 MB/sec
Apr 17 23:27:38.380596 kernel: 32regs : 17359 MB/sec
Apr 17 23:27:38.382051 kernel: arm64_neon : 22911 MB/sec
Apr 17 23:27:38.382124 kernel: xor: using function: arm64_neon (22911 MB/sec)
Apr 17 23:27:38.431497 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:27:38.448502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:27:38.453717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:27:38.478188 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Apr 17 23:27:38.481642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:27:38.491074 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:27:38.509784 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Apr 17 23:27:38.550838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:27:38.558671 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:27:38.609864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:27:38.616839 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:27:38.640512 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:27:38.641400 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:27:38.642336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:27:38.644786 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:27:38.652486 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:27:38.672415 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:27:38.721687 kernel: scsi host0: Virtio SCSI HBA
Apr 17 23:27:38.723501 kernel: ACPI: bus type USB registered
Apr 17 23:27:38.723553 kernel: usbcore: registered new interface driver usbfs
Apr 17 23:27:38.729881 kernel: usbcore: registered new interface driver hub
Apr 17 23:27:38.729957 kernel: usbcore: registered new device driver usb
Apr 17 23:27:38.733538 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 23:27:38.742480 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 17 23:27:38.752041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:27:38.752173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:27:38.754381 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:27:38.760586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:27:38.760765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:38.763020 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:38.773797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:38.781483 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 17 23:27:38.781679 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 17 23:27:38.785829 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 17 23:27:38.786700 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 17 23:27:38.786859 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 17 23:27:38.788572 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 17 23:27:38.791592 kernel: hub 1-0:1.0: USB hub found
Apr 17 23:27:38.791774 kernel: hub 1-0:1.0: 4 ports detected
Apr 17 23:27:38.791857 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 17 23:27:38.792021 kernel: hub 2-0:1.0: USB hub found
Apr 17 23:27:38.792118 kernel: hub 2-0:1.0: 4 ports detected
Apr 17 23:27:38.798731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:38.808951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:27:38.813648 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 17 23:27:38.815747 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 17 23:27:38.816016 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:27:38.818520 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 17 23:27:38.823070 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 17 23:27:38.823280 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 17 23:27:38.823370 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 17 23:27:38.823868 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 17 23:27:38.824025 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 17 23:27:38.831378 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:27:38.831438 kernel: GPT:17805311 != 80003071
Apr 17 23:27:38.831472 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:27:38.831484 kernel: GPT:17805311 != 80003071
Apr 17 23:27:38.831494 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:27:38.831504 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:27:38.832721 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 17 23:27:38.833433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:27:38.882481 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (509)
Apr 17 23:27:38.889482 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (523)
Apr 17 23:27:38.892748 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 17 23:27:38.900466 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 17 23:27:38.910257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 23:27:38.917154 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 17 23:27:38.918363 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 17 23:27:38.929940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:27:38.935443 disk-uuid[575]: Primary Header is updated.
Apr 17 23:27:38.935443 disk-uuid[575]: Secondary Entries is updated.
Apr 17 23:27:38.935443 disk-uuid[575]: Secondary Header is updated.
Apr 17 23:27:38.941549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:27:39.033521 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 17 23:27:39.169679 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 17 23:27:39.169759 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 17 23:27:39.170048 kernel: usbcore: registered new interface driver usbhid
Apr 17 23:27:39.170498 kernel: usbhid: USB HID core driver
Apr 17 23:27:39.276486 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 17 23:27:39.407488 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 17 23:27:39.462505 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 17 23:27:39.961473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:27:39.961713 disk-uuid[576]: The operation has completed successfully.
Apr 17 23:27:40.016266 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:27:40.016382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:27:40.033799 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:27:40.040973 sh[594]: Success
Apr 17 23:27:40.056489 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 17 23:27:40.103975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:27:40.115852 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:27:40.118144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:27:40.134877 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350
Apr 17 23:27:40.134965 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:40.134990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:27:40.136206 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:27:40.136797 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:27:40.146493 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:27:40.149486 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:27:40.154209 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:27:40.174957 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:27:40.180669 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:27:40.195423 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:40.195503 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:40.196511 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:27:40.202839 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:27:40.202947 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:27:40.216190 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:27:40.217766 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:40.225834 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:27:40.231143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:27:40.299131 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:27:40.308658 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:27:40.332524 ignition[699]: Ignition 2.19.0
Apr 17 23:27:40.335585 systemd-networkd[780]: lo: Link UP
Apr 17 23:27:40.332534 ignition[699]: Stage: fetch-offline
Apr 17 23:27:40.335590 systemd-networkd[780]: lo: Gained carrier
Apr 17 23:27:40.332568 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:40.335998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:27:40.332580 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:40.337678 systemd-networkd[780]: Enumeration completed
Apr 17 23:27:40.333039 ignition[699]: parsed url from cmdline: ""
Apr 17 23:27:40.339041 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:40.333043 ignition[699]: no config URL provided
Apr 17 23:27:40.339045 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:40.333050 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:27:40.341078 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:40.333060 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:27:40.341082 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:40.333066 ignition[699]: failed to fetch config: resource requires networking
Apr 17 23:27:40.341177 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:27:40.333296 ignition[699]: Ignition finished successfully
Apr 17 23:27:40.342306 systemd-networkd[780]: eth0: Link UP
Apr 17 23:27:40.342311 systemd-networkd[780]: eth0: Gained carrier
Apr 17 23:27:40.342321 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:40.343417 systemd[1]: Reached target network.target - Network.
Apr 17 23:27:40.350534 systemd-networkd[780]: eth1: Link UP
Apr 17 23:27:40.350538 systemd-networkd[780]: eth1: Gained carrier
Apr 17 23:27:40.350547 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:40.351046 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:27:40.363964 ignition[783]: Ignition 2.19.0
Apr 17 23:27:40.363973 ignition[783]: Stage: fetch
Apr 17 23:27:40.364209 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:40.364220 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:40.364322 ignition[783]: parsed url from cmdline: ""
Apr 17 23:27:40.364325 ignition[783]: no config URL provided
Apr 17 23:27:40.364330 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:27:40.364337 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:27:40.364354 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 17 23:27:40.364976 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 17 23:27:40.391555 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:27:40.407559 systemd-networkd[780]: eth0: DHCPv4 address 159.69.127.159/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:27:40.566044 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 17 23:27:40.575173 ignition[783]: GET result: OK
Apr 17 23:27:40.576038 ignition[783]: parsing config with SHA512: 73066af55b6c7beae4cd7e87a8e3d196d86051135f0a60f555f046b8cd0d1e9705b07499112b5771f012084d6cd58584672a595d9315741b1c700a4e011dd050
Apr 17 23:27:40.581682 unknown[783]: fetched base config from "system"
Apr 17 23:27:40.582092 ignition[783]: fetch: fetch complete
Apr 17 23:27:40.581694 unknown[783]: fetched base config from "system"
Apr 17 23:27:40.582097 ignition[783]: fetch: fetch passed
Apr 17 23:27:40.581699 unknown[783]: fetched user config from "hetzner"
Apr 17 23:27:40.582151 ignition[783]: Ignition finished successfully
Apr 17 23:27:40.584676 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:27:40.589784 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:27:40.603308 ignition[791]: Ignition 2.19.0
Apr 17 23:27:40.603317 ignition[791]: Stage: kargs
Apr 17 23:27:40.603530 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:40.603540 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:40.605722 ignition[791]: kargs: kargs passed
Apr 17 23:27:40.605784 ignition[791]: Ignition finished successfully
Apr 17 23:27:40.608519 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:27:40.612719 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:27:40.643766 ignition[797]: Ignition 2.19.0
Apr 17 23:27:40.644497 ignition[797]: Stage: disks
Apr 17 23:27:40.645065 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:40.645076 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:40.647517 ignition[797]: disks: disks passed
Apr 17 23:27:40.647968 ignition[797]: Ignition finished successfully
Apr 17 23:27:40.650699 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:27:40.652090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:27:40.653545 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:27:40.655037 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:27:40.655582 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:27:40.656136 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:27:40.666986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:27:40.686328 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:27:40.690630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:27:40.700621 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:27:40.756279 kernel: EXT4-fs (sda9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none.
Apr 17 23:27:40.756753 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:27:40.758145 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:27:40.764628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:27:40.767768 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:27:40.771629 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:27:40.777715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:27:40.779188 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Apr 17 23:27:40.779309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:27:40.782566 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:40.782591 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:40.782604 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:27:40.783837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:27:40.790707 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:27:40.795424 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:27:40.795467 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:27:40.801077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:27:40.841960 coreos-metadata[816]: Apr 17 23:27:40.841 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 17 23:27:40.844193 coreos-metadata[816]: Apr 17 23:27:40.843 INFO Fetch successful
Apr 17 23:27:40.845153 coreos-metadata[816]: Apr 17 23:27:40.844 INFO wrote hostname ci-4081-3-6-n-6417c65d59 to /sysroot/etc/hostname
Apr 17 23:27:40.847713 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:27:40.850225 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:27:40.856245 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:27:40.862797 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:27:40.867989 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:27:40.971124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:27:40.975599 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:27:40.977250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:27:40.990480 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:41.007514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:27:41.013345 ignition[934]: INFO : Ignition 2.19.0
Apr 17 23:27:41.013345 ignition[934]: INFO : Stage: mount
Apr 17 23:27:41.016203 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:41.016203 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:41.016203 ignition[934]: INFO : mount: mount passed
Apr 17 23:27:41.016203 ignition[934]: INFO : Ignition finished successfully
Apr 17 23:27:41.019563 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:27:41.023570 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:27:41.135396 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:27:41.148826 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:27:41.160214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Apr 17 23:27:41.160298 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:41.160326 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:41.161514 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:27:41.164583 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:27:41.164627 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:27:41.168710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:27:41.188388 ignition[960]: INFO : Ignition 2.19.0
Apr 17 23:27:41.189244 ignition[960]: INFO : Stage: files
Apr 17 23:27:41.190019 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:41.190799 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:41.191987 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:27:41.193473 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:27:41.193473 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:27:41.198521 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:27:41.199666 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:27:41.201030 unknown[960]: wrote ssh authorized keys file for user: core
Apr 17 23:27:41.202241 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:27:41.204869 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:27:41.206084 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 23:27:41.294999 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:27:41.562193 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:27:41.562193 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 17 23:27:41.565176 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 17 23:27:41.880637 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:27:41.885593 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 17 23:27:42.270276 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 17 23:27:42.447361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 17 23:27:42.447361 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:27:42.450540 ignition[960]: INFO : files: files passed
Apr 17 23:27:42.450540 ignition[960]: INFO : Ignition finished successfully
Apr 17 23:27:42.452385 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:27:42.463315 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:27:42.466655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:27:42.469995 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:27:42.472557 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:27:42.492928 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:42.492928 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:42.496422 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:42.499242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:27:42.500737 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:27:42.510750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:27:42.562034 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:27:42.563545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:27:42.565079 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:27:42.568618 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:27:42.569336 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:27:42.577540 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:27:42.591018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:27:42.597771 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:27:42.613031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:27:42.614718 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:27:42.616325 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:27:42.617829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:27:42.618032 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:27:42.621893 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:27:42.623086 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:27:42.624188 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:27:42.625270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:27:42.626645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:27:42.627936 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:27:42.629300 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:27:42.630592 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:27:42.631913 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:27:42.633037 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:27:42.634028 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:27:42.634201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:27:42.635534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:27:42.637091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:27:42.638225 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:27:42.641548 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:27:42.643226 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:27:42.643417 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:27:42.645256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:27:42.645434 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:27:42.646820 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:27:42.647002 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:27:42.648048 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 23:27:42.648347 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:27:42.661214 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:27:42.664774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:27:42.665788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:27:42.666036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:27:42.669370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:27:42.669661 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:27:42.678738 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:27:42.678839 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:27:42.685857 ignition[1013]: INFO : Ignition 2.19.0
Apr 17 23:27:42.687668 ignition[1013]: INFO : Stage: umount
Apr 17 23:27:42.687668 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:42.687668 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:42.693333 ignition[1013]: INFO : umount: umount passed
Apr 17 23:27:42.693333 ignition[1013]: INFO : Ignition finished successfully
Apr 17 23:27:42.691364 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:27:42.693778 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:27:42.693967 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:27:42.696960 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:27:42.697069 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:27:42.698312 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:27:42.698361 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:27:42.701268 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:27:42.701322 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:27:42.702608 systemd[1]: Stopped target network.target - Network.
Apr 17 23:27:42.706178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:27:42.706250 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:27:42.707663 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:27:42.713786 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:27:42.717519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:27:42.720927 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:27:42.722497 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:27:42.724359 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:27:42.724407 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:27:42.726140 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:27:42.726182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:27:42.728551 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:27:42.728614 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:27:42.729545 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:27:42.729601 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:27:42.730506 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:27:42.731285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:27:42.734695 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:27:42.734824 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:27:42.736414 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:27:42.736727 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:27:42.740391 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:27:42.740554 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:27:42.741737 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 17 23:27:42.743978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:27:42.744043 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:27:42.746538 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 17 23:27:42.749352 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:27:42.749530 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:27:42.751213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:27:42.751248 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:27:42.758864 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:27:42.759419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:27:42.759519 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:27:42.760432 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:27:42.760527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:27:42.762120 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:27:42.762163 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:27:42.764983 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:27:42.783792 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:27:42.784138 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:27:42.788255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:27:42.788592 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:27:42.790148 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:27:42.790182 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:27:42.791284 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:27:42.791337 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:27:42.792920 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:27:42.792970 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:27:42.794615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:27:42.794663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:27:42.811555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:27:42.814208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:27:42.814374 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:27:42.821691 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:27:42.821789 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:27:42.824643 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:27:42.824716 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:27:42.827357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:27:42.827425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:42.830166 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:27:42.830291 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:27:42.832114 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:27:42.832215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:27:42.835855 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:27:42.842813 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:27:42.852435 systemd[1]: Switching root.
Apr 17 23:27:42.882154 systemd-journald[237]: Journal stopped
Apr 17 23:27:43.774053 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:27:43.774121 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:27:43.774134 kernel: SELinux: policy capability open_perms=1
Apr 17 23:27:43.774144 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:27:43.774153 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:27:43.774166 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:27:43.774176 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:27:43.774185 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:27:43.774203 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:27:43.774213 systemd[1]: Successfully loaded SELinux policy in 35.247ms.
Apr 17 23:27:43.774236 kernel: audit: type=1403 audit(1776468463.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:27:43.774247 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.484ms.
Apr 17 23:27:43.774259 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:27:43.774269 systemd[1]: Detected virtualization kvm.
Apr 17 23:27:43.774281 systemd[1]: Detected architecture arm64.
Apr 17 23:27:43.774292 systemd[1]: Detected first boot.
Apr 17 23:27:43.774307 systemd[1]: Hostname set to .
Apr 17 23:27:43.774319 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:27:43.774330 zram_generator::config[1055]: No configuration found.
Apr 17 23:27:43.774341 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:27:43.774351 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:27:43.774361 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:27:43.774373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:27:43.774384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:27:43.774399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:27:43.774410 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:27:43.774420 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:27:43.774430 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:27:43.774440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:27:43.774504 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:27:43.774519 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:27:43.774530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:27:43.774541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:27:43.774551 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:27:43.774561 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:27:43.774572 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:27:43.774582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:27:43.774592 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 17 23:27:43.774604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:27:43.774616 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:27:43.774627 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:27:43.774637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:27:43.774648 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:27:43.774658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:27:43.774668 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:27:43.774680 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:27:43.774690 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:27:43.774700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:27:43.774710 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:27:43.774721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:27:43.774731 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:27:43.774742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:27:43.774753 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:27:43.774763 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:27:43.774780 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:27:43.774792 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:27:43.774802 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:27:43.774812 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:27:43.774823 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:27:43.774835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:27:43.774845 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:27:43.774855 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:27:43.774894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:43.774910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:27:43.774923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:27:43.774935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:43.774948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:27:43.774958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:43.774970 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:27:43.774981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:43.774992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:27:43.775003 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:27:43.775013 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:27:43.775023 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:27:43.775033 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:27:43.775043 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:27:43.775054 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:27:43.775065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:27:43.775076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:27:43.775086 kernel: loop: module loaded
Apr 17 23:27:43.775097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:27:43.775108 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:27:43.775123 systemd[1]: Stopped verity-setup.service.
Apr 17 23:27:43.775133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:27:43.775143 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:27:43.775155 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:27:43.775165 kernel: ACPI: bus type drm_connector registered
Apr 17 23:27:43.775175 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:27:43.775210 systemd-journald[1125]: Collecting audit messages is disabled.
Apr 17 23:27:43.775233 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:27:43.775246 kernel: fuse: init (API version 7.39)
Apr 17 23:27:43.775256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:27:43.775267 systemd-journald[1125]: Journal started
Apr 17 23:27:43.775288 systemd-journald[1125]: Runtime Journal (/run/log/journal/df094abc10b94a6bac8389ab4f3e7bc5) is 8.0M, max 76.6M, 68.6M free.
Apr 17 23:27:43.515366 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:27:43.531736 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:27:43.532434 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:27:43.779711 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:27:43.779710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:27:43.780909 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:27:43.783317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:27:43.785426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:43.785652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:43.786784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:27:43.788107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:27:43.789275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:43.789410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:43.791955 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:27:43.792095 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:27:43.793244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:43.793380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:43.796486 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:27:43.797435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:27:43.799658 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:27:43.812001 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:27:43.821091 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:27:43.826689 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:27:43.833224 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:27:43.834032 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:27:43.834069 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:27:43.836255 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:27:43.847808 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:27:43.854554 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:27:43.855345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:43.865743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:27:43.869157 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:27:43.871617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:43.872810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:27:43.873495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:43.877811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:27:43.882655 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:27:43.888466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:27:43.891109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:27:43.893686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:27:43.894724 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:27:43.914479 kernel: loop0: detected capacity change from 0 to 114328
Apr 17 23:27:43.922091 systemd-journald[1125]: Time spent on flushing to /var/log/journal/df094abc10b94a6bac8389ab4f3e7bc5 is 107.641ms for 1125 entries.
Apr 17 23:27:43.922091 systemd-journald[1125]: System Journal (/var/log/journal/df094abc10b94a6bac8389ab4f3e7bc5) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:27:44.056080 systemd-journald[1125]: Received client request to flush runtime journal.
Apr 17 23:27:44.056147 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:27:44.056734 kernel: loop1: detected capacity change from 0 to 114432
Apr 17 23:27:43.923131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:27:43.929939 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:27:43.941946 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:27:43.944103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:27:43.957129 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:27:43.996204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:27:44.008171 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:27:44.012242 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Apr 17 23:27:44.012253 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Apr 17 23:27:44.023837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:27:44.028003 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:27:44.034976 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:27:44.047346 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:27:44.060543 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:27:44.095096 kernel: loop2: detected capacity change from 0 to 8
Apr 17 23:27:44.100231 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:27:44.109009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:27:44.122468 kernel: loop3: detected capacity change from 0 to 200864
Apr 17 23:27:44.137105 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Apr 17 23:27:44.137123 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Apr 17 23:27:44.142265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:27:44.163122 kernel: loop4: detected capacity change from 0 to 114328
Apr 17 23:27:44.176517 kernel: loop5: detected capacity change from 0 to 114432
Apr 17 23:27:44.189238 kernel: loop6: detected capacity change from 0 to 8
Apr 17 23:27:44.190474 kernel: loop7: detected capacity change from 0 to 200864
Apr 17 23:27:44.213563 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 17 23:27:44.214429 (sd-merge)[1197]: Merged extensions into '/usr'.
Apr 17 23:27:44.228599 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:27:44.228618 systemd[1]: Reloading...
Apr 17 23:27:44.367736 zram_generator::config[1223]: No configuration found.
Apr 17 23:27:44.413476 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:27:44.485187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:27:44.537825 systemd[1]: Reloading finished in 308 ms.
Apr 17 23:27:44.570800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:27:44.574877 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:27:44.581807 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:27:44.592735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:27:44.605890 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:27:44.605907 systemd[1]: Reloading...
Apr 17 23:27:44.626713 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:27:44.627007 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:27:44.627729 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:27:44.627988 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 17 23:27:44.628040 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 17 23:27:44.635046 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:27:44.635058 systemd-tmpfiles[1261]: Skipping /boot
Apr 17 23:27:44.648537 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:27:44.648554 systemd-tmpfiles[1261]: Skipping /boot
Apr 17 23:27:44.698494 zram_generator::config[1291]: No configuration found.
Apr 17 23:27:44.797743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:27:44.849820 systemd[1]: Reloading finished in 243 ms.
Apr 17 23:27:44.872050 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:27:44.888511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:27:44.904059 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:44.909787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:27:44.914586 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:27:44.918944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:27:44.925781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:27:44.930769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:27:44.934979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:44.943488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:44.949761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:44.954956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:44.957716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:44.960492 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:27:44.968712 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:27:44.974440 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:27:44.985275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:44.986571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:44.994427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:44.998979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:45.000353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:45.001175 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:45.001964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:45.006484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:27:45.009218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:27:45.011097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:45.013556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:45.025220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:45.025395 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:27:45.032367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:27:45.041100 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:27:45.042972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:45.043148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:45.046528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:45.053816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:27:45.056670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:45.060224 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Apr 17 23:27:45.061432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:45.062233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:45.071666 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:27:45.072733 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:27:45.073237 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:27:45.074486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:27:45.083410 augenrules[1369]: No rules
Apr 17 23:27:45.089174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:45.091920 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:27:45.096056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:45.097538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:45.099911 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:45.100527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:45.104011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:45.104132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:45.110625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:27:45.120754 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:27:45.173623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:27:45.174624 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:27:45.214383 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 17 23:27:45.239098 systemd-resolved[1334]: Positive Trust Anchors:
Apr 17 23:27:45.239122 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:27:45.239155 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:27:45.245060 systemd-resolved[1334]: Using system hostname 'ci-4081-3-6-n-6417c65d59'.
Apr 17 23:27:45.245752 systemd-networkd[1386]: lo: Link UP
Apr 17 23:27:45.246093 systemd-networkd[1386]: lo: Gained carrier
Apr 17 23:27:45.247238 systemd-networkd[1386]: Enumeration completed
Apr 17 23:27:45.247279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:27:45.248204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:27:45.249685 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:27:45.251018 systemd[1]: Reached target network.target - Network.
Apr 17 23:27:45.265810 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:27:45.328193 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:45.328204 systemd-networkd[1386]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:45.329126 systemd-networkd[1386]: eth1: Link UP
Apr 17 23:27:45.329232 systemd-networkd[1386]: eth1: Gained carrier
Apr 17 23:27:45.329304 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:45.337312 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:45.337323 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:45.338236 systemd-networkd[1386]: eth0: Link UP
Apr 17 23:27:45.338356 systemd-networkd[1386]: eth0: Gained carrier
Apr 17 23:27:45.338418 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:45.365819 systemd-networkd[1386]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:27:45.366833 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:45.375517 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:27:45.381578 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 17 23:27:45.381714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:45.391666 systemd-networkd[1386]: eth0: DHCPv4 address 159.69.127.159/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:27:45.393098 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:45.414270 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1387)
Apr 17 23:27:45.412731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:45.415661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:45.429632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:45.430396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:45.430430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:27:45.430833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:45.432510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:45.437580 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:45.439421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:45.442985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:45.443148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:45.450152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:45.450208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:45.469519 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 17 23:27:45.469594 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 17 23:27:45.469638 kernel: [drm] features: -context_init
Apr 17 23:27:45.476282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 23:27:45.479721 kernel: [drm] number of scanouts: 1
Apr 17 23:27:45.479821 kernel: [drm] number of cap sets: 0
Apr 17 23:27:45.493832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:27:45.518474 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 17 23:27:45.519917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:45.524143 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:27:45.528567 kernel: Console: switching to colour frame buffer device 160x50
Apr 17 23:27:45.542492 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 17 23:27:45.549520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:27:45.549915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:45.558933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:45.622909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:45.668174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:27:45.682869 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:27:45.699884 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:27:45.726741 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:27:45.731660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:27:45.732385 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:27:45.733284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:27:45.734187 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:27:45.735230 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:27:45.736112 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:27:45.736969 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:27:45.737797 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:27:45.737835 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:27:45.738396 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:27:45.740039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:27:45.742251 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:27:45.751923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:27:45.754827 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:27:45.756349 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:27:45.757336 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:27:45.758120 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:27:45.758801 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:27:45.758834 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:27:45.764625 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:27:45.768726 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:27:45.772579 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:27:45.774761 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:27:45.780529 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:27:45.784691 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:27:45.785722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:27:45.789666 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:27:45.794573 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:27:45.797504 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 17 23:27:45.804709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:27:45.809819 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:27:45.815340 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:27:45.816964 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:27:45.817487 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:27:45.820674 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:27:45.827254 jq[1457]: false
Apr 17 23:27:45.826217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:27:45.844200 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:27:45.847573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:27:45.857956 jq[1468]: true
Apr 17 23:27:45.864079 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:27:45.867173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:27:45.867349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:27:45.878729 extend-filesystems[1458]: Found loop4
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found loop5
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found loop6
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found loop7
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda1
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda2
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda3
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found usr
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda4
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda6
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda7
Apr 17 23:27:45.890278 extend-filesystems[1458]: Found sda9
Apr 17 23:27:45.890278 extend-filesystems[1458]: Checking size of /dev/sda9
Apr 17 23:27:45.959173 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 17 23:27:45.890977 dbus-daemon[1456]: [system] SELinux support is enabled
Apr 17 23:27:45.962872 coreos-metadata[1455]: Apr 17 23:27:45.929 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 17 23:27:45.962872 coreos-metadata[1455]: Apr 17 23:27:45.939 INFO Fetch successful
Apr 17 23:27:45.962872 coreos-metadata[1455]: Apr 17 23:27:45.939 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 17 23:27:45.962872 coreos-metadata[1455]: Apr 17 23:27:45.946 INFO Fetch successful
Apr 17 23:27:45.963023 tar[1472]: linux-arm64/LICENSE
Apr 17 23:27:45.963023 tar[1472]: linux-arm64/helm
Apr 17 23:27:45.893789 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:27:45.963280 extend-filesystems[1458]: Resized partition /dev/sda9
Apr 17 23:27:45.913734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:27:45.973050 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:27:45.983496 jq[1476]: true
Apr 17 23:27:45.913783 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:27:45.918464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:27:45.918493 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:27:45.924044 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:27:45.963075 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:27:45.963254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:27:46.002140 systemd-logind[1466]: New seat seat0.
Apr 17 23:27:46.015170 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 17 23:27:46.015189 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 17 23:27:46.015559 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:27:46.021469 update_engine[1467]: I20260417 23:27:46.009339 1467 main.cc:92] Flatcar Update Engine starting
Apr 17 23:27:46.029736 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:27:46.033302 update_engine[1467]: I20260417 23:27:46.031672 1467 update_check_scheduler.cc:74] Next update check in 6m56s
Apr 17 23:27:46.037796 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:27:46.060746 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1394)
Apr 17 23:27:46.126379 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:27:46.136523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:27:46.155670 systemd[1]: Starting sshkeys.service...
Apr 17 23:27:46.158933 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:27:46.163576 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:27:46.193571 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 17 23:27:46.194786 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:27:46.215919 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:27:46.242219 extend-filesystems[1498]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 17 23:27:46.242219 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 17 23:27:46.242219 extend-filesystems[1498]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 17 23:27:46.249833 extend-filesystems[1458]: Resized filesystem in /dev/sda9
Apr 17 23:27:46.249833 extend-filesystems[1458]: Found sr0
Apr 17 23:27:46.255353 coreos-metadata[1537]: Apr 17 23:27:46.249 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 17 23:27:46.255353 coreos-metadata[1537]: Apr 17 23:27:46.251 INFO Fetch successful
Apr 17 23:27:46.247725 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:27:46.247934 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:27:46.254538 unknown[1537]: wrote ssh authorized keys file for user: core
Apr 17 23:27:46.282412 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:27:46.285512 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 23:27:46.291723 systemd[1]: Finished sshkeys.service.
Apr 17 23:27:46.328025 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:27:46.352152 containerd[1486]: time="2026-04-17T23:27:46.352053360Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:27:46.416530 containerd[1486]: time="2026-04-17T23:27:46.416409760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.420165 containerd[1486]: time="2026-04-17T23:27:46.420106120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:46.420165 containerd[1486]: time="2026-04-17T23:27:46.420156400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:27:46.420165 containerd[1486]: time="2026-04-17T23:27:46.420174840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:27:46.420362 containerd[1486]: time="2026-04-17T23:27:46.420341320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:27:46.420393 containerd[1486]: time="2026-04-17T23:27:46.420363440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.420445 containerd[1486]: time="2026-04-17T23:27:46.420428960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:46.420489 containerd[1486]: time="2026-04-17T23:27:46.420444320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.421666 containerd[1486]: time="2026-04-17T23:27:46.421626720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:46.421666 containerd[1486]: time="2026-04-17T23:27:46.421662280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.421763 containerd[1486]: time="2026-04-17T23:27:46.421684280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:46.421763 containerd[1486]: time="2026-04-17T23:27:46.421695160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.421797 containerd[1486]: time="2026-04-17T23:27:46.421784200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.422022 containerd[1486]: time="2026-04-17T23:27:46.421998920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:46.422166 containerd[1486]: time="2026-04-17T23:27:46.422145640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:46.422192 containerd[1486]: time="2026-04-17T23:27:46.422165280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:27:46.422266 containerd[1486]: time="2026-04-17T23:27:46.422250280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:27:46.422306 containerd[1486]: time="2026-04-17T23:27:46.422294160Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:27:46.429724 containerd[1486]: time="2026-04-17T23:27:46.429500000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:27:46.429724 containerd[1486]: time="2026-04-17T23:27:46.429566760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:27:46.429724 containerd[1486]: time="2026-04-17T23:27:46.429582680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:27:46.429724 containerd[1486]: time="2026-04-17T23:27:46.429610000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:27:46.429724 containerd[1486]: time="2026-04-17T23:27:46.429683400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:27:46.430141 containerd[1486]: time="2026-04-17T23:27:46.429902080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:27:46.431671 containerd[1486]: time="2026-04-17T23:27:46.431643400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431793600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431815960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431830720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431858600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431874480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431896160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.431910 containerd[1486]: time="2026-04-17T23:27:46.431913200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.431928440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.431941640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.431956440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.431968240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.431989000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432003000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432015240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432028600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432040400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432052600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432065160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432078920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432091120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.432152 containerd[1486]: time="2026-04-17T23:27:46.432109600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432122040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432136360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432148720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432164440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432184760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432197200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432214960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432332040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432352120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432363320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432377120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432528120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432549040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:27:46.433003 containerd[1486]: time="2026-04-17T23:27:46.432559360Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:27:46.433651 containerd[1486]: time="2026-04-17T23:27:46.432571920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:27:46.433674 containerd[1486]: time="2026-04-17T23:27:46.432880640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:27:46.433674 containerd[1486]: time="2026-04-17T23:27:46.432944440Z" level=info msg="Connect containerd service"
Apr 17 23:27:46.433674 containerd[1486]: time="2026-04-17T23:27:46.432974120Z" level=info msg="using legacy CRI server"
Apr 17 23:27:46.433674 containerd[1486]: time="2026-04-17T23:27:46.432981240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:27:46.433674 containerd[1486]: time="2026-04-17T23:27:46.433061760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.435817240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436066800Z" level=info msg="Start subscribing containerd event"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436122880Z" level=info msg="Start recovering state"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436195160Z" level=info msg="Start event monitor"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436209880Z" level=info msg="Start snapshots syncer"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436219120Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:27:46.436639 containerd[1486]: time="2026-04-17T23:27:46.436226680Z" level=info msg="Start streaming server"
Apr 17 23:27:46.438697 containerd[1486]: time="2026-04-17T23:27:46.438660880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:27:46.438762 containerd[1486]: time="2026-04-17T23:27:46.438715920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:27:46.440806 containerd[1486]: time="2026-04-17T23:27:46.438781960Z" level=info msg="containerd successfully booted in 0.090889s"
Apr 17 23:27:46.438932 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:27:46.645513 tar[1472]: linux-arm64/README.md
Apr 17 23:27:46.659495 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:27:46.877651 systemd-networkd[1386]: eth1: Gained IPv6LL
Apr 17 23:27:46.878216 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:46.882694 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:27:46.886267 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:27:46.893565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:46.900549 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:27:46.949535 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:27:47.005641 systemd-networkd[1386]: eth0: Gained IPv6LL
Apr 17 23:27:47.006097 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:47.518169 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:27:47.547514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:27:47.555199 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:27:47.569535 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:27:47.569895 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:27:47.584677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:27:47.597581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:27:47.608984 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:27:47.616717 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 17 23:27:47.618714 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:27:47.725539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:47.727373 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:27:47.731693 systemd[1]: Startup finished in 793ms (kernel) + 5.316s (initrd) + 4.759s (userspace) = 10.869s.
Apr 17 23:27:47.732258 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:27:48.167260 kubelet[1587]: E0417 23:27:48.167150 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:27:48.172055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:27:48.172217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:27:50.671067 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:27:50.685983 systemd[1]: Started sshd@0-159.69.127.159:22-50.85.169.122:42692.service - OpenSSH per-connection server daemon (50.85.169.122:42692).
Apr 17 23:27:50.810390 sshd[1599]: Accepted publickey for core from 50.85.169.122 port 42692 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:50.813117 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:50.822649 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:27:50.828046 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:27:50.832582 systemd-logind[1466]: New session 1 of user core.
Apr 17 23:27:50.845898 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:27:50.858919 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:27:50.863764 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:27:50.965855 systemd[1603]: Queued start job for default target default.target.
Apr 17 23:27:50.973741 systemd[1603]: Created slice app.slice - User Application Slice.
Apr 17 23:27:50.973801 systemd[1603]: Reached target paths.target - Paths.
Apr 17 23:27:50.973893 systemd[1603]: Reached target timers.target - Timers.
Apr 17 23:27:50.976312 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:27:51.004771 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:27:51.004870 systemd[1603]: Reached target sockets.target - Sockets.
Apr 17 23:27:51.004885 systemd[1603]: Reached target basic.target - Basic System.
Apr 17 23:27:51.004931 systemd[1603]: Reached target default.target - Main User Target.
Apr 17 23:27:51.004960 systemd[1603]: Startup finished in 134ms.
Apr 17 23:27:51.005554 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:27:51.016242 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:27:51.142030 systemd[1]: Started sshd@1-159.69.127.159:22-50.85.169.122:42694.service - OpenSSH per-connection server daemon (50.85.169.122:42694).
Apr 17 23:27:51.267635 sshd[1614]: Accepted publickey for core from 50.85.169.122 port 42694 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:51.270314 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:51.277544 systemd-logind[1466]: New session 2 of user core.
Apr 17 23:27:51.283858 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:27:51.388013 sshd[1614]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:51.392873 systemd[1]: sshd@1-159.69.127.159:22-50.85.169.122:42694.service: Deactivated successfully.
Apr 17 23:27:51.394978 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:27:51.396769 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:27:51.398180 systemd-logind[1466]: Removed session 2.
Apr 17 23:27:51.432729 systemd[1]: Started sshd@2-159.69.127.159:22-50.85.169.122:42696.service - OpenSSH per-connection server daemon (50.85.169.122:42696).
Apr 17 23:27:51.557047 sshd[1621]: Accepted publickey for core from 50.85.169.122 port 42696 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:51.560222 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:51.567072 systemd-logind[1466]: New session 3 of user core.
Apr 17 23:27:51.574164 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:27:51.676284 sshd[1621]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:51.680879 systemd[1]: sshd@2-159.69.127.159:22-50.85.169.122:42696.service: Deactivated successfully.
Apr 17 23:27:51.683171 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:27:51.685259 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:27:51.686483 systemd-logind[1466]: Removed session 3.
Apr 17 23:27:51.710900 systemd[1]: Started sshd@3-159.69.127.159:22-50.85.169.122:42708.service - OpenSSH per-connection server daemon (50.85.169.122:42708).
Apr 17 23:27:51.835227 sshd[1628]: Accepted publickey for core from 50.85.169.122 port 42708 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:51.841030 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:51.846203 systemd-logind[1466]: New session 4 of user core.
Apr 17 23:27:51.853880 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:27:51.959430 sshd[1628]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:51.964690 systemd[1]: sshd@3-159.69.127.159:22-50.85.169.122:42708.service: Deactivated successfully.
Apr 17 23:27:51.967209 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:27:51.969677 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:27:51.971333 systemd-logind[1466]: Removed session 4.
Apr 17 23:27:51.987350 systemd[1]: Started sshd@4-159.69.127.159:22-50.85.169.122:42716.service - OpenSSH per-connection server daemon (50.85.169.122:42716).
Apr 17 23:27:52.113116 sshd[1635]: Accepted publickey for core from 50.85.169.122 port 42716 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:52.115599 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:52.121236 systemd-logind[1466]: New session 5 of user core.
Apr 17 23:27:52.129980 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:27:52.234842 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:27:52.235177 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:52.250005 sudo[1638]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:52.268174 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:52.273555 systemd[1]: sshd@4-159.69.127.159:22-50.85.169.122:42716.service: Deactivated successfully.
Apr 17 23:27:52.275982 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:27:52.280074 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:27:52.281217 systemd-logind[1466]: Removed session 5.
Apr 17 23:27:52.301834 systemd[1]: Started sshd@5-159.69.127.159:22-50.85.169.122:42726.service - OpenSSH per-connection server daemon (50.85.169.122:42726).
Apr 17 23:27:52.437113 sshd[1643]: Accepted publickey for core from 50.85.169.122 port 42726 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:52.439294 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:52.445547 systemd-logind[1466]: New session 6 of user core.
Apr 17 23:27:52.450084 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:27:52.537814 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:27:52.539335 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:52.545569 sudo[1647]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:52.552386 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:27:52.553426 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:52.578291 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:52.584568 auditctl[1650]: No rules
Apr 17 23:27:52.582480 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:27:52.582673 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:52.587653 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:52.631947 augenrules[1668]: No rules
Apr 17 23:27:52.636349 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:52.638048 sudo[1646]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:52.655138 sshd[1643]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:52.662580 systemd[1]: sshd@5-159.69.127.159:22-50.85.169.122:42726.service: Deactivated successfully.
Apr 17 23:27:52.665231 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:27:52.666516 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:27:52.668556 systemd-logind[1466]: Removed session 6.
Apr 17 23:27:52.686357 systemd[1]: Started sshd@6-159.69.127.159:22-50.85.169.122:42742.service - OpenSSH per-connection server daemon (50.85.169.122:42742).
Apr 17 23:27:52.824341 sshd[1676]: Accepted publickey for core from 50.85.169.122 port 42742 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:52.825614 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:52.832012 systemd-logind[1466]: New session 7 of user core.
Apr 17 23:27:52.838869 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:27:52.926408 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:27:52.926721 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:53.242999 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:27:53.244087 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:27:53.498583 dockerd[1694]: time="2026-04-17T23:27:53.498318720Z" level=info msg="Starting up"
Apr 17 23:27:53.587191 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3186201202-merged.mount: Deactivated successfully.
Apr 17 23:27:53.606936 dockerd[1694]: time="2026-04-17T23:27:53.606873080Z" level=info msg="Loading containers: start."
Apr 17 23:27:53.729823 kernel: Initializing XFRM netlink socket
Apr 17 23:27:53.753621 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:53.755188 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:53.768530 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:53.818557 systemd-networkd[1386]: docker0: Link UP
Apr 17 23:27:53.818917 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 17 23:27:53.836503 dockerd[1694]: time="2026-04-17T23:27:53.836398000Z" level=info msg="Loading containers: done."
Apr 17 23:27:53.853538 dockerd[1694]: time="2026-04-17T23:27:53.853358280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:27:53.853763 dockerd[1694]: time="2026-04-17T23:27:53.853614080Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:27:53.853849 dockerd[1694]: time="2026-04-17T23:27:53.853776960Z" level=info msg="Daemon has completed initialization"
Apr 17 23:27:53.912169 dockerd[1694]: time="2026-04-17T23:27:53.912028760Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:27:53.914094 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:27:54.402700 containerd[1486]: time="2026-04-17T23:27:54.402655880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 17 23:27:54.989721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617460211.mount: Deactivated successfully.
Apr 17 23:27:56.004461 containerd[1486]: time="2026-04-17T23:27:56.003598200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:56.007167 containerd[1486]: time="2026-04-17T23:27:56.007103960Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193866"
Apr 17 23:27:56.009898 containerd[1486]: time="2026-04-17T23:27:56.008592480Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:56.016386 containerd[1486]: time="2026-04-17T23:27:56.015648920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 1.6129454s"
Apr 17 23:27:56.016386 containerd[1486]: time="2026-04-17T23:27:56.015703800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\""
Apr 17 23:27:56.016713 containerd[1486]: time="2026-04-17T23:27:56.016345760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:56.017648 containerd[1486]: time="2026-04-17T23:27:56.017009760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 17 23:27:57.142271 containerd[1486]: time="2026-04-17T23:27:57.142218240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:57.144978 containerd[1486]: time="2026-04-17T23:27:57.144943240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901464"
Apr 17 23:27:57.146433 containerd[1486]: time="2026-04-17T23:27:57.146392480Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:57.152248 containerd[1486]: time="2026-04-17T23:27:57.152185640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:57.153789 containerd[1486]: time="2026-04-17T23:27:57.153735880Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 1.13669552s"
Apr 17 23:27:57.153922 containerd[1486]: time="2026-04-17T23:27:57.153902880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\""
Apr 17 23:27:57.155104 containerd[1486]: time="2026-04-17T23:27:57.155076840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 17 23:27:58.043491 containerd[1486]: time="2026-04-17T23:27:58.042005800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:58.044324 containerd[1486]: time="2026-04-17T23:27:58.044279520Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047965"
Apr 17 23:27:58.044604 containerd[1486]: time="2026-04-17T23:27:58.044568840Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:58.050405 containerd[1486]: time="2026-04-17T23:27:58.050357720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:58.051925 containerd[1486]: time="2026-04-17T23:27:58.051880560Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 896.76488ms"
Apr 17 23:27:58.052069 containerd[1486]: time="2026-04-17T23:27:58.052053120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\""
Apr 17 23:27:58.052751 containerd[1486]: time="2026-04-17T23:27:58.052727600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 17 23:27:58.423005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:27:58.434950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:58.609047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:58.620352 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:27:58.693854 kubelet[1911]: E0417 23:27:58.693680 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:27:58.698212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:27:58.698347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:27:58.969303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666399279.mount: Deactivated successfully.
Apr 17 23:27:59.241655 containerd[1486]: time="2026-04-17T23:27:59.239499560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:59.242308 containerd[1486]: time="2026-04-17T23:27:59.241907800Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606312"
Apr 17 23:27:59.243139 containerd[1486]: time="2026-04-17T23:27:59.243072560Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:59.248001 containerd[1486]: time="2026-04-17T23:27:59.247949800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:59.248860 containerd[1486]: time="2026-04-17T23:27:59.248745680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.19585008s"
Apr 17 23:27:59.249007 containerd[1486]: time="2026-04-17T23:27:59.248986960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\""
Apr 17 23:27:59.249608 containerd[1486]: time="2026-04-17T23:27:59.249550040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 23:27:59.744339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765592343.mount: Deactivated successfully.
Apr 17 23:28:00.632501 containerd[1486]: time="2026-04-17T23:28:00.632233960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:00.636161 containerd[1486]: time="2026-04-17T23:28:00.636010800Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498"
Apr 17 23:28:00.637946 containerd[1486]: time="2026-04-17T23:28:00.637876200Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:00.643299 containerd[1486]: time="2026-04-17T23:28:00.641432560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:00.643299 containerd[1486]: time="2026-04-17T23:28:00.642868200Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.39327048s"
Apr 17 23:28:00.643299 containerd[1486]: time="2026-04-17T23:28:00.643027440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Apr 17 23:28:00.643962 containerd[1486]: time="2026-04-17T23:28:00.643910120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:28:01.105336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204934952.mount: Deactivated successfully.
Apr 17 23:28:01.115177 containerd[1486]: time="2026-04-17T23:28:01.115105040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.116235 containerd[1486]: time="2026-04-17T23:28:01.116198760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 17 23:28:01.117409 containerd[1486]: time="2026-04-17T23:28:01.117074840Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.120283 containerd[1486]: time="2026-04-17T23:28:01.120246320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.121518 containerd[1486]: time="2026-04-17T23:28:01.121486640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 477.53584ms"
Apr 17 23:28:01.121642 containerd[1486]: time="2026-04-17T23:28:01.121626640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 17 23:28:01.122476 containerd[1486]: time="2026-04-17T23:28:01.122188320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 23:28:01.623119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129839923.mount: Deactivated successfully.
Apr 17 23:28:02.387491 containerd[1486]: time="2026-04-17T23:28:02.385638480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:02.388786 containerd[1486]: time="2026-04-17T23:28:02.388663000Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139756"
Apr 17 23:28:02.389529 containerd[1486]: time="2026-04-17T23:28:02.389440680Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:02.394401 containerd[1486]: time="2026-04-17T23:28:02.394335960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:02.396223 containerd[1486]: time="2026-04-17T23:28:02.396179160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.27395832s"
Apr 17 23:28:02.396694 containerd[1486]: time="2026-04-17T23:28:02.396388160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Apr 17 23:28:07.818237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:07.826982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:28:07.862003 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)...
Apr 17 23:28:07.862025 systemd[1]: Reloading...
Apr 17 23:28:07.998474 zram_generator::config[2124]: No configuration found.
Apr 17 23:28:08.075822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:28:08.154851 systemd[1]: Reloading finished in 292 ms.
Apr 17 23:28:08.216819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:28:08.217080 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:28:08.218618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:08.225895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:28:08.367802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:08.369952 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:28:08.419929 kubelet[2159]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:28:08.420415 kubelet[2159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:28:08.421605 kubelet[2159]: I0417 23:28:08.421550 2159 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:28:09.254533 kubelet[2159]: I0417 23:28:09.254441 2159 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 23:28:09.254533 kubelet[2159]: I0417 23:28:09.254521 2159 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:28:09.254722 kubelet[2159]: I0417 23:28:09.254567 2159 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:28:09.254722 kubelet[2159]: I0417 23:28:09.254579 2159 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:28:09.255312 kubelet[2159]: I0417 23:28:09.255249 2159 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:28:09.264824 kubelet[2159]: E0417 23:28:09.264782 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://159.69.127.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:28:09.265487 kubelet[2159]: I0417 23:28:09.265105 2159 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:28:09.271474 kubelet[2159]: E0417 23:28:09.271400 2159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:28:09.271731 kubelet[2159]: I0417 23:28:09.271711 2159 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:28:09.274849 kubelet[2159]: I0417 23:28:09.274801 2159 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:28:09.275082 kubelet[2159]: I0417 23:28:09.275045 2159 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:28:09.276983 kubelet[2159]: I0417 23:28:09.275076 2159 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-6417c65d59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:28:09.276983 kubelet[2159]: I0417 23:28:09.276990 2159 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:28:09.277158 kubelet[2159]: I0417 23:28:09.277004 2159 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:28:09.277184 kubelet[2159]: I0417 23:28:09.277160 2159 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:28:09.280626 kubelet[2159]: I0417 23:28:09.280588 2159 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:28:09.282661 kubelet[2159]: I0417 23:28:09.282635 2159 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:28:09.284563 kubelet[2159]: I0417 23:28:09.282668 2159 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:28:09.284563 kubelet[2159]: I0417 23:28:09.282713 2159 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:28:09.284563 kubelet[2159]: I0417 23:28:09.282729 2159 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:28:09.284846 kubelet[2159]: I0417 23:28:09.284824 2159 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:28:09.285557 kubelet[2159]: I0417 23:28:09.285530 2159 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:28:09.285661 kubelet[2159]: I0417 23:28:09.285651 2159 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:28:09.285802 kubelet[2159]: W0417 23:28:09.285790 2159 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:28:09.287882 kubelet[2159]: E0417 23:28:09.287839 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.69.127.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:28:09.288018 kubelet[2159]: E0417 23:28:09.287955 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.69.127.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-6417c65d59&limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:28:09.291557 kubelet[2159]: I0417 23:28:09.291527 2159 server.go:1262] "Started kubelet"
Apr 17 23:28:09.295808 kubelet[2159]: I0417 23:28:09.295765 2159 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:28:09.296309 kubelet[2159]: I0417 23:28:09.296241 2159 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:28:09.296357 kubelet[2159]: I0417 23:28:09.296324 2159 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:28:09.296736 kubelet[2159]: I0417 23:28:09.296661 2159 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:28:09.297179 kubelet[2159]: I0417 23:28:09.297159 2159 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:28:09.299584 kubelet[2159]: I0417 23:28:09.299559 2159 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:28:09.303298 kubelet[2159]: E0417 23:28:09.300581 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.127.159:6443/api/v1/namespaces/default/events\": dial tcp 159.69.127.159:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-6417c65d59.18a748adcae4dc90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-6417c65d59,UID:ci-4081-3-6-n-6417c65d59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-6417c65d59,},FirstTimestamp:2026-04-17 23:28:09.2914884 +0000 UTC m=+0.916586641,LastTimestamp:2026-04-17 23:28:09.2914884 +0000 UTC m=+0.916586641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-6417c65d59,}"
Apr 17 23:28:09.308545 kubelet[2159]: E0417 23:28:09.308439 2159 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:28:09.308935 kubelet[2159]: I0417 23:28:09.308904 2159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:28:09.312155 kubelet[2159]: E0417 23:28:09.312119 2159 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6417c65d59\" not found"
Apr 17 23:28:09.312155 kubelet[2159]: I0417 23:28:09.312161 2159 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:28:09.312841 kubelet[2159]: I0417 23:28:09.312425 2159 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:28:09.312924 kubelet[2159]: I0417 23:28:09.312903 2159 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:28:09.313862 kubelet[2159]: I0417 23:28:09.313830 2159 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:28:09.313948 kubelet[2159]: I0417 23:28:09.313936 2159 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:28:09.315311 kubelet[2159]: E0417 23:28:09.315263 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.69.127.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:28:09.316374 kubelet[2159]: E0417 23:28:09.316315 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.127.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6417c65d59?timeout=10s\": dial tcp 159.69.127.159:6443: connect: connection refused" interval="200ms"
Apr 17 23:28:09.318254 kubelet[2159]: I0417 23:28:09.317197 2159 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:28:09.333776 kubelet[2159]: I0417 23:28:09.333660 2159 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:28:09.335264 kubelet[2159]: I0417 23:28:09.335235 2159 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:28:09.335400 kubelet[2159]: I0417 23:28:09.335388 2159 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:28:09.335530 kubelet[2159]: I0417 23:28:09.335515 2159 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:28:09.335671 kubelet[2159]: E0417 23:28:09.335647 2159 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:28:09.343160 kubelet[2159]: E0417 23:28:09.343097 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.69.127.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:28:09.353735 kubelet[2159]: I0417 23:28:09.353674 2159 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:28:09.353735 kubelet[2159]: I0417 23:28:09.353724 2159 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:28:09.353903 kubelet[2159]: I0417 23:28:09.353751 2159 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:28:09.356176 kubelet[2159]: I0417 23:28:09.356146 2159 policy_none.go:49] "None policy: Start"
Apr 17 23:28:09.356176 kubelet[2159]: I0417 23:28:09.356177 2159 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:28:09.356327 kubelet[2159]: I0417 23:28:09.356192 2159 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:28:09.360553 kubelet[2159]: I0417 23:28:09.360492 2159 policy_none.go:47] "Start"
Apr 17 23:28:09.366425 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:28:09.379753 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:28:09.387566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:28:09.397517 kubelet[2159]: E0417 23:28:09.395775 2159 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:28:09.397517 kubelet[2159]: I0417 23:28:09.395996 2159 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:28:09.397517 kubelet[2159]: I0417 23:28:09.396007 2159 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:28:09.397517 kubelet[2159]: I0417 23:28:09.396662 2159 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:28:09.398492 kubelet[2159]: E0417 23:28:09.398274 2159 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:28:09.398492 kubelet[2159]: E0417 23:28:09.398338 2159 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-6417c65d59\" not found" Apr 17 23:28:09.453367 systemd[1]: Created slice kubepods-burstable-pod97aca4857ce9d5f7e69afd4065b4364e.slice - libcontainer container kubepods-burstable-pod97aca4857ce9d5f7e69afd4065b4364e.slice. Apr 17 23:28:09.464090 kubelet[2159]: E0417 23:28:09.464027 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.470358 systemd[1]: Created slice kubepods-burstable-pode28221849c9a32cefc1c9b90887a5866.slice - libcontainer container kubepods-burstable-pode28221849c9a32cefc1c9b90887a5866.slice. 
Apr 17 23:28:09.473246 kubelet[2159]: E0417 23:28:09.473207 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.487552 systemd[1]: Created slice kubepods-burstable-pod7df473cb844c589f5cfa9716bc90bc99.slice - libcontainer container kubepods-burstable-pod7df473cb844c589f5cfa9716bc90bc99.slice. Apr 17 23:28:09.490200 kubelet[2159]: E0417 23:28:09.490160 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.500300 kubelet[2159]: I0417 23:28:09.500196 2159 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.500980 kubelet[2159]: E0417 23:28:09.500938 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.127.159:6443/api/v1/nodes\": dial tcp 159.69.127.159:6443: connect: connection refused" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.513879 kubelet[2159]: I0417 23:28:09.513658 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.513879 kubelet[2159]: I0417 23:28:09.513779 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7df473cb844c589f5cfa9716bc90bc99-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-6417c65d59\" (UID: \"7df473cb844c589f5cfa9716bc90bc99\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.513879 
kubelet[2159]: I0417 23:28:09.513821 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.513879 kubelet[2159]: I0417 23:28:09.513850 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.514175 kubelet[2159]: I0417 23:28:09.513898 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.514175 kubelet[2159]: I0417 23:28:09.513939 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.514175 kubelet[2159]: I0417 23:28:09.513990 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.514175 kubelet[2159]: I0417 23:28:09.514023 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.514175 kubelet[2159]: I0417 23:28:09.514057 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.518200 kubelet[2159]: E0417 23:28:09.517894 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.127.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6417c65d59?timeout=10s\": dial tcp 159.69.127.159:6443: connect: connection refused" interval="400ms" Apr 17 23:28:09.704758 kubelet[2159]: I0417 23:28:09.704723 2159 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.705196 kubelet[2159]: E0417 23:28:09.705160 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.127.159:6443/api/v1/nodes\": dial tcp 159.69.127.159:6443: connect: connection refused" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:09.770060 containerd[1486]: time="2026-04-17T23:28:09.769928400Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-6417c65d59,Uid:97aca4857ce9d5f7e69afd4065b4364e,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:09.776343 containerd[1486]: time="2026-04-17T23:28:09.776302720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-6417c65d59,Uid:e28221849c9a32cefc1c9b90887a5866,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:09.798266 containerd[1486]: time="2026-04-17T23:28:09.798224680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-6417c65d59,Uid:7df473cb844c589f5cfa9716bc90bc99,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:09.919410 kubelet[2159]: E0417 23:28:09.919296 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.127.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6417c65d59?timeout=10s\": dial tcp 159.69.127.159:6443: connect: connection refused" interval="800ms" Apr 17 23:28:10.107601 kubelet[2159]: I0417 23:28:10.107406 2159 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:10.108307 kubelet[2159]: E0417 23:28:10.108245 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.127.159:6443/api/v1/nodes\": dial tcp 159.69.127.159:6443: connect: connection refused" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:10.175616 kubelet[2159]: E0417 23:28:10.175427 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.69.127.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:28:10.277467 kubelet[2159]: E0417 23:28:10.276958 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://159.69.127.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:28:10.288168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644290076.mount: Deactivated successfully. Apr 17 23:28:10.299741 containerd[1486]: time="2026-04-17T23:28:10.299660760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:10.303038 containerd[1486]: time="2026-04-17T23:28:10.302792520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:28:10.304336 containerd[1486]: time="2026-04-17T23:28:10.304293840Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:10.307406 containerd[1486]: time="2026-04-17T23:28:10.306728240Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:10.308254 containerd[1486]: time="2026-04-17T23:28:10.308220760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:10.309261 containerd[1486]: time="2026-04-17T23:28:10.309235080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 17 23:28:10.309938 containerd[1486]: time="2026-04-17T23:28:10.309907640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Apr 17 23:28:10.313911 containerd[1486]: time="2026-04-17T23:28:10.313544800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:10.315240 containerd[1486]: time="2026-04-17T23:28:10.315200680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.16788ms" Apr 17 23:28:10.318176 containerd[1486]: time="2026-04-17T23:28:10.317818560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.49588ms" Apr 17 23:28:10.318586 containerd[1486]: time="2026-04-17T23:28:10.318552320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.16408ms" Apr 17 23:28:10.396551 kubelet[2159]: E0417 23:28:10.396362 2159 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.69.127.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-6417c65d59&limit=500&resourceVersion=0\": dial tcp 159.69.127.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 17 23:28:10.451019 containerd[1486]: time="2026-04-17T23:28:10.450778080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:10.451019 containerd[1486]: time="2026-04-17T23:28:10.450928120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:10.451019 containerd[1486]: time="2026-04-17T23:28:10.450944920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.451578 containerd[1486]: time="2026-04-17T23:28:10.451536040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.455735 containerd[1486]: time="2026-04-17T23:28:10.455324920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:10.455735 containerd[1486]: time="2026-04-17T23:28:10.455391720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:10.455735 containerd[1486]: time="2026-04-17T23:28:10.455412120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.455735 containerd[1486]: time="2026-04-17T23:28:10.455579960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.456503 containerd[1486]: time="2026-04-17T23:28:10.456141880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:10.456503 containerd[1486]: time="2026-04-17T23:28:10.456199640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:10.456503 containerd[1486]: time="2026-04-17T23:28:10.456214640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.456503 containerd[1486]: time="2026-04-17T23:28:10.456297960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.486722 systemd[1]: Started cri-containerd-6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff.scope - libcontainer container 6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff. Apr 17 23:28:10.489765 systemd[1]: Started cri-containerd-be2fe18e5ce16ba64af0f1b717622c2e81a86e371ecd3abb948f7aee5f80a0f6.scope - libcontainer container be2fe18e5ce16ba64af0f1b717622c2e81a86e371ecd3abb948f7aee5f80a0f6. Apr 17 23:28:10.492476 systemd[1]: Started cri-containerd-d7282fad8fbb29b7d864cc800979416c9e636a22b9fa6414c235ae868d769e6b.scope - libcontainer container d7282fad8fbb29b7d864cc800979416c9e636a22b9fa6414c235ae868d769e6b. 
Apr 17 23:28:10.562089 containerd[1486]: time="2026-04-17T23:28:10.561969440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-6417c65d59,Uid:7df473cb844c589f5cfa9716bc90bc99,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7282fad8fbb29b7d864cc800979416c9e636a22b9fa6414c235ae868d769e6b\"" Apr 17 23:28:10.571268 containerd[1486]: time="2026-04-17T23:28:10.571216200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-6417c65d59,Uid:e28221849c9a32cefc1c9b90887a5866,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff\"" Apr 17 23:28:10.577527 containerd[1486]: time="2026-04-17T23:28:10.577397040Z" level=info msg="CreateContainer within sandbox \"d7282fad8fbb29b7d864cc800979416c9e636a22b9fa6414c235ae868d769e6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:28:10.579533 containerd[1486]: time="2026-04-17T23:28:10.579384040Z" level=info msg="CreateContainer within sandbox \"6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:28:10.593921 containerd[1486]: time="2026-04-17T23:28:10.593849360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-6417c65d59,Uid:97aca4857ce9d5f7e69afd4065b4364e,Namespace:kube-system,Attempt:0,} returns sandbox id \"be2fe18e5ce16ba64af0f1b717622c2e81a86e371ecd3abb948f7aee5f80a0f6\"" Apr 17 23:28:10.597880 containerd[1486]: time="2026-04-17T23:28:10.597824920Z" level=info msg="CreateContainer within sandbox \"d7282fad8fbb29b7d864cc800979416c9e636a22b9fa6414c235ae868d769e6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6971f7007680f5a4d6e6c228180512d3390f3547fc6230a5c1e955fdfe6247e\"" Apr 17 23:28:10.599855 containerd[1486]: time="2026-04-17T23:28:10.599806720Z" level=info 
msg="StartContainer for \"e6971f7007680f5a4d6e6c228180512d3390f3547fc6230a5c1e955fdfe6247e\"" Apr 17 23:28:10.600620 containerd[1486]: time="2026-04-17T23:28:10.600525680Z" level=info msg="CreateContainer within sandbox \"be2fe18e5ce16ba64af0f1b717622c2e81a86e371ecd3abb948f7aee5f80a0f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:28:10.620017 containerd[1486]: time="2026-04-17T23:28:10.619909520Z" level=info msg="CreateContainer within sandbox \"6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3\"" Apr 17 23:28:10.620631 containerd[1486]: time="2026-04-17T23:28:10.620596880Z" level=info msg="StartContainer for \"9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3\"" Apr 17 23:28:10.628892 containerd[1486]: time="2026-04-17T23:28:10.628207680Z" level=info msg="CreateContainer within sandbox \"be2fe18e5ce16ba64af0f1b717622c2e81a86e371ecd3abb948f7aee5f80a0f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0fd0793a708c104be2623810885678ae0e3a3178eb754c66f37788a02b9ffde8\"" Apr 17 23:28:10.628892 containerd[1486]: time="2026-04-17T23:28:10.628623400Z" level=info msg="StartContainer for \"0fd0793a708c104be2623810885678ae0e3a3178eb754c66f37788a02b9ffde8\"" Apr 17 23:28:10.630806 systemd[1]: Started cri-containerd-e6971f7007680f5a4d6e6c228180512d3390f3547fc6230a5c1e955fdfe6247e.scope - libcontainer container e6971f7007680f5a4d6e6c228180512d3390f3547fc6230a5c1e955fdfe6247e. Apr 17 23:28:10.672775 systemd[1]: Started cri-containerd-9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3.scope - libcontainer container 9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3. 
Apr 17 23:28:10.688523 systemd[1]: Started cri-containerd-0fd0793a708c104be2623810885678ae0e3a3178eb754c66f37788a02b9ffde8.scope - libcontainer container 0fd0793a708c104be2623810885678ae0e3a3178eb754c66f37788a02b9ffde8. Apr 17 23:28:10.693546 containerd[1486]: time="2026-04-17T23:28:10.693397320Z" level=info msg="StartContainer for \"e6971f7007680f5a4d6e6c228180512d3390f3547fc6230a5c1e955fdfe6247e\" returns successfully" Apr 17 23:28:10.720382 kubelet[2159]: E0417 23:28:10.719871 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.127.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6417c65d59?timeout=10s\": dial tcp 159.69.127.159:6443: connect: connection refused" interval="1.6s" Apr 17 23:28:10.736378 containerd[1486]: time="2026-04-17T23:28:10.736197880Z" level=info msg="StartContainer for \"9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3\" returns successfully" Apr 17 23:28:10.745911 containerd[1486]: time="2026-04-17T23:28:10.745861480Z" level=info msg="StartContainer for \"0fd0793a708c104be2623810885678ae0e3a3178eb754c66f37788a02b9ffde8\" returns successfully" Apr 17 23:28:10.910836 kubelet[2159]: I0417 23:28:10.910801 2159 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:11.367405 kubelet[2159]: E0417 23:28:11.366999 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:11.373165 kubelet[2159]: E0417 23:28:11.372986 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:11.374617 kubelet[2159]: E0417 23:28:11.374360 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.376840 kubelet[2159]: E0417 23:28:12.376580 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.376840 kubelet[2159]: E0417 23:28:12.376729 2159 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.579059 kubelet[2159]: E0417 23:28:12.578750 2159 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-6417c65d59\" not found" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.597609 kubelet[2159]: I0417 23:28:12.597554 2159 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.616771 kubelet[2159]: I0417 23:28:12.616066 2159 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.765260 kubelet[2159]: E0417 23:28:12.765152 2159 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.765544 kubelet[2159]: I0417 23:28:12.765403 2159 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.786419 kubelet[2159]: E0417 23:28:12.786380 2159 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.786419 kubelet[2159]: I0417 23:28:12.786414 2159 
kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:12.791410 kubelet[2159]: E0417 23:28:12.791368 2159 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-6417c65d59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:13.286803 kubelet[2159]: I0417 23:28:13.286504 2159 apiserver.go:52] "Watching apiserver" Apr 17 23:28:13.313677 kubelet[2159]: I0417 23:28:13.313572 2159 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:28:13.408205 kubelet[2159]: I0417 23:28:13.408108 2159 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:13.411129 kubelet[2159]: E0417 23:28:13.411100 2159 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:13.640307 kubelet[2159]: I0417 23:28:13.640174 2159 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:14.837252 systemd[1]: Reloading requested from client PID 2444 ('systemctl') (unit session-7.scope)... Apr 17 23:28:14.837269 systemd[1]: Reloading... Apr 17 23:28:14.932482 zram_generator::config[2484]: No configuration found. Apr 17 23:28:15.047003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:28:15.142797 systemd[1]: Reloading finished in 305 ms. Apr 17 23:28:15.191373 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:28:15.208617 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:28:15.209278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:28:15.209581 systemd[1]: kubelet.service: Consumed 1.332s CPU time, 121.8M memory peak, 0B memory swap peak. Apr 17 23:28:15.216990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:28:15.362413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:28:15.374915 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:28:15.427072 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:28:15.427072 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:28:15.427072 kubelet[2529]: I0417 23:28:15.426767 2529 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:28:15.440593 kubelet[2529]: I0417 23:28:15.440554 2529 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:28:15.440593 kubelet[2529]: I0417 23:28:15.440583 2529 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:28:15.440786 kubelet[2529]: I0417 23:28:15.440613 2529 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:28:15.440786 kubelet[2529]: I0417 23:28:15.440619 2529 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:28:15.440876 kubelet[2529]: I0417 23:28:15.440851 2529 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:28:15.443544 kubelet[2529]: I0417 23:28:15.443514 2529 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:28:15.446074 kubelet[2529]: I0417 23:28:15.445857 2529 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:28:15.449105 kubelet[2529]: E0417 23:28:15.449041 2529 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:28:15.449105 kubelet[2529]: I0417 23:28:15.449108 2529 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:28:15.452065 kubelet[2529]: I0417 23:28:15.452033 2529 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:28:15.452265 kubelet[2529]: I0417 23:28:15.452241 2529 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:28:15.452438 kubelet[2529]: I0417 23:28:15.452267 2529 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-6417c65d59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:28:15.452534 kubelet[2529]: I0417 23:28:15.452441 2529 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:28:15.452534 kubelet[2529]: I0417 23:28:15.452467 2529 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:28:15.452534 kubelet[2529]: I0417 23:28:15.452504 2529 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:28:15.452764 kubelet[2529]: I0417 23:28:15.452750 2529 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:28:15.452922 kubelet[2529]: I0417 23:28:15.452912 2529 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:28:15.452952 kubelet[2529]: I0417 23:28:15.452928 2529 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:28:15.452986 kubelet[2529]: I0417 23:28:15.452959 2529 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:28:15.452986 kubelet[2529]: I0417 23:28:15.452978 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:28:15.455543 kubelet[2529]: I0417 23:28:15.455522 2529 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:28:15.456298 kubelet[2529]: I0417 23:28:15.456274 2529 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:28:15.458336 kubelet[2529]: I0417 23:28:15.458309 2529 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:28:15.462752 kubelet[2529]: I0417 23:28:15.462730 2529 server.go:1262] "Started kubelet" Apr 17 23:28:15.465052 kubelet[2529]: I0417 23:28:15.465017 2529 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:28:15.466743 kubelet[2529]: I0417 23:28:15.466716 2529 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:28:15.476135 kubelet[2529]: I0417 23:28:15.475595 2529 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:28:15.476343 kubelet[2529]: I0417 23:28:15.476323 2529 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:28:15.476615 kubelet[2529]: I0417 23:28:15.476598 2529 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:28:15.479854 kubelet[2529]: I0417 23:28:15.479826 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:28:15.483673 kubelet[2529]: I0417 23:28:15.483628 2529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:28:15.487060 kubelet[2529]: I0417 23:28:15.487032 2529 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:28:15.487522 kubelet[2529]: E0417 23:28:15.487500 2529 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6417c65d59\" not found" Apr 17 23:28:15.488429 kubelet[2529]: I0417 23:28:15.488202 2529 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:28:15.489119 kubelet[2529]: I0417 23:28:15.489095 2529 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:28:15.511236 kubelet[2529]: I0417 23:28:15.511198 2529 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:28:15.511392 kubelet[2529]: I0417 23:28:15.511300 2529 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:28:15.513840 kubelet[2529]: I0417 23:28:15.513678 2529 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:28:15.517241 kubelet[2529]: E0417 23:28:15.517212 2529 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:28:15.518309 kubelet[2529]: I0417 23:28:15.518137 2529 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:28:15.518309 kubelet[2529]: I0417 23:28:15.518159 2529 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:28:15.518309 kubelet[2529]: I0417 23:28:15.518177 2529 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:28:15.518309 kubelet[2529]: E0417 23:28:15.518244 2529 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:28:15.524162 kubelet[2529]: I0417 23:28:15.524116 2529 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.563916 2529 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.563943 2529 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.563965 2529 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.564113 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.564123 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:28:15.564200 kubelet[2529]: I0417 23:28:15.564138 2529 policy_none.go:49] "None policy: Start" Apr 17 23:28:15.565410 kubelet[2529]: I0417 23:28:15.564327 2529 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:28:15.565410 kubelet[2529]: I0417 23:28:15.564345 2529 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:28:15.565410 kubelet[2529]: I0417 23:28:15.564515 2529 state_mem.go:77] "Updated machine memory state" logger="Memory Manager 
state checkpoint" Apr 17 23:28:15.565410 kubelet[2529]: I0417 23:28:15.564527 2529 policy_none.go:47] "Start" Apr 17 23:28:15.571624 kubelet[2529]: E0417 23:28:15.569729 2529 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:28:15.571624 kubelet[2529]: I0417 23:28:15.569906 2529 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:28:15.571624 kubelet[2529]: I0417 23:28:15.569917 2529 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:28:15.571624 kubelet[2529]: I0417 23:28:15.570150 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:28:15.573480 kubelet[2529]: E0417 23:28:15.573057 2529 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:28:15.619772 kubelet[2529]: I0417 23:28:15.619695 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.620314 kubelet[2529]: I0417 23:28:15.620277 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.622485 kubelet[2529]: I0417 23:28:15.620784 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.631893 kubelet[2529]: E0417 23:28:15.631835 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-6417c65d59\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.675147 kubelet[2529]: I0417 23:28:15.675116 2529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.684912 kubelet[2529]: I0417 23:28:15.684400 2529 kubelet_node_status.go:124] "Node was 
previously registered" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.684912 kubelet[2529]: I0417 23:28:15.684538 2529 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.690061 kubelet[2529]: I0417 23:28:15.689680 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.690061 kubelet[2529]: I0417 23:28:15.689724 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.690061 kubelet[2529]: I0417 23:28:15.689758 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7df473cb844c589f5cfa9716bc90bc99-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-6417c65d59\" (UID: \"7df473cb844c589f5cfa9716bc90bc99\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.690061 kubelet[2529]: I0417 23:28:15.689779 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.690061 kubelet[2529]: I0417 23:28:15.689801 2529 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.692666 kubelet[2529]: I0417 23:28:15.689850 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: \"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.692666 kubelet[2529]: I0417 23:28:15.689868 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.692666 kubelet[2529]: I0417 23:28:15.689891 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97aca4857ce9d5f7e69afd4065b4364e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" (UID: \"97aca4857ce9d5f7e69afd4065b4364e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:15.692666 kubelet[2529]: I0417 23:28:15.689931 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e28221849c9a32cefc1c9b90887a5866-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-6417c65d59\" (UID: 
\"e28221849c9a32cefc1c9b90887a5866\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:16.454480 kubelet[2529]: I0417 23:28:16.454412 2529 apiserver.go:52] "Watching apiserver" Apr 17 23:28:16.489047 kubelet[2529]: I0417 23:28:16.488978 2529 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:28:16.545672 kubelet[2529]: I0417 23:28:16.545604 2529 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:16.556469 kubelet[2529]: E0417 23:28:16.555733 2529 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-6417c65d59\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" Apr 17 23:28:16.642312 kubelet[2529]: I0417 23:28:16.642215 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6417c65d59" podStartSLOduration=3.64219876 podStartE2EDuration="3.64219876s" podCreationTimestamp="2026-04-17 23:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:16.6061088 +0000 UTC m=+1.224210361" watchObservedRunningTime="2026-04-17 23:28:16.64219876 +0000 UTC m=+1.260300281" Apr 17 23:28:16.661379 kubelet[2529]: I0417 23:28:16.661303 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6417c65d59" podStartSLOduration=1.6612832 podStartE2EDuration="1.6612832s" podCreationTimestamp="2026-04-17 23:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:16.64354736 +0000 UTC m=+1.261648881" watchObservedRunningTime="2026-04-17 23:28:16.6612832 +0000 UTC m=+1.279384761" Apr 17 23:28:21.457949 kubelet[2529]: I0417 23:28:21.457666 2529 
kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:28:21.458881 containerd[1486]: time="2026-04-17T23:28:21.458181960Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:28:21.459858 kubelet[2529]: I0417 23:28:21.459039 2529 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:28:22.490762 kubelet[2529]: I0417 23:28:22.490615 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6417c65d59" podStartSLOduration=7.49056932 podStartE2EDuration="7.49056932s" podCreationTimestamp="2026-04-17 23:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:16.6645168 +0000 UTC m=+1.282618361" watchObservedRunningTime="2026-04-17 23:28:22.49056932 +0000 UTC m=+7.108670881" Apr 17 23:28:22.504154 systemd[1]: Created slice kubepods-besteffort-podad4a454b_b5ca_42dc_b4e4_b5cea42d2580.slice - libcontainer container kubepods-besteffort-podad4a454b_b5ca_42dc_b4e4_b5cea42d2580.slice. 
Apr 17 23:28:22.539285 kubelet[2529]: I0417 23:28:22.539217 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad4a454b-b5ca-42dc-b4e4-b5cea42d2580-kube-proxy\") pod \"kube-proxy-l5hnb\" (UID: \"ad4a454b-b5ca-42dc-b4e4-b5cea42d2580\") " pod="kube-system/kube-proxy-l5hnb" Apr 17 23:28:22.540063 kubelet[2529]: I0417 23:28:22.539713 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad4a454b-b5ca-42dc-b4e4-b5cea42d2580-lib-modules\") pod \"kube-proxy-l5hnb\" (UID: \"ad4a454b-b5ca-42dc-b4e4-b5cea42d2580\") " pod="kube-system/kube-proxy-l5hnb" Apr 17 23:28:22.540063 kubelet[2529]: I0417 23:28:22.539785 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad4a454b-b5ca-42dc-b4e4-b5cea42d2580-xtables-lock\") pod \"kube-proxy-l5hnb\" (UID: \"ad4a454b-b5ca-42dc-b4e4-b5cea42d2580\") " pod="kube-system/kube-proxy-l5hnb" Apr 17 23:28:22.540063 kubelet[2529]: I0417 23:28:22.539850 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf9j\" (UniqueName: \"kubernetes.io/projected/ad4a454b-b5ca-42dc-b4e4-b5cea42d2580-kube-api-access-zkf9j\") pod \"kube-proxy-l5hnb\" (UID: \"ad4a454b-b5ca-42dc-b4e4-b5cea42d2580\") " pod="kube-system/kube-proxy-l5hnb" Apr 17 23:28:22.665945 systemd[1]: Created slice kubepods-besteffort-pod185127b5_5e67_42e4_9282_71e105156b12.slice - libcontainer container kubepods-besteffort-pod185127b5_5e67_42e4_9282_71e105156b12.slice. 
Apr 17 23:28:22.741925 kubelet[2529]: I0417 23:28:22.741661 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpgg\" (UniqueName: \"kubernetes.io/projected/185127b5-5e67-42e4-9282-71e105156b12-kube-api-access-8kpgg\") pod \"tigera-operator-5588576f44-jdlpb\" (UID: \"185127b5-5e67-42e4-9282-71e105156b12\") " pod="tigera-operator/tigera-operator-5588576f44-jdlpb" Apr 17 23:28:22.741925 kubelet[2529]: I0417 23:28:22.741738 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/185127b5-5e67-42e4-9282-71e105156b12-var-lib-calico\") pod \"tigera-operator-5588576f44-jdlpb\" (UID: \"185127b5-5e67-42e4-9282-71e105156b12\") " pod="tigera-operator/tigera-operator-5588576f44-jdlpb" Apr 17 23:28:22.815165 containerd[1486]: time="2026-04-17T23:28:22.815031920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5hnb,Uid:ad4a454b-b5ca-42dc-b4e4-b5cea42d2580,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:22.843305 containerd[1486]: time="2026-04-17T23:28:22.843081960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:22.843444 containerd[1486]: time="2026-04-17T23:28:22.843383760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:22.844077 containerd[1486]: time="2026-04-17T23:28:22.843441120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:22.844077 containerd[1486]: time="2026-04-17T23:28:22.843770920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:22.879918 systemd[1]: Started cri-containerd-639a63f126d4a181b4fbfd5cb8689e1f034491a733c5e7d9cfa6a00d45f56775.scope - libcontainer container 639a63f126d4a181b4fbfd5cb8689e1f034491a733c5e7d9cfa6a00d45f56775. Apr 17 23:28:22.914176 containerd[1486]: time="2026-04-17T23:28:22.914005440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5hnb,Uid:ad4a454b-b5ca-42dc-b4e4-b5cea42d2580,Namespace:kube-system,Attempt:0,} returns sandbox id \"639a63f126d4a181b4fbfd5cb8689e1f034491a733c5e7d9cfa6a00d45f56775\"" Apr 17 23:28:22.924479 containerd[1486]: time="2026-04-17T23:28:22.924159200Z" level=info msg="CreateContainer within sandbox \"639a63f126d4a181b4fbfd5cb8689e1f034491a733c5e7d9cfa6a00d45f56775\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:28:22.939947 containerd[1486]: time="2026-04-17T23:28:22.939852400Z" level=info msg="CreateContainer within sandbox \"639a63f126d4a181b4fbfd5cb8689e1f034491a733c5e7d9cfa6a00d45f56775\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"855d03007b884f8cfabf3d90bc672871e4abbacdebf6e04ac5016a06391ca057\"" Apr 17 23:28:22.942012 containerd[1486]: time="2026-04-17T23:28:22.941136480Z" level=info msg="StartContainer for \"855d03007b884f8cfabf3d90bc672871e4abbacdebf6e04ac5016a06391ca057\"" Apr 17 23:28:22.971841 systemd[1]: Started cri-containerd-855d03007b884f8cfabf3d90bc672871e4abbacdebf6e04ac5016a06391ca057.scope - libcontainer container 855d03007b884f8cfabf3d90bc672871e4abbacdebf6e04ac5016a06391ca057. 
Apr 17 23:28:22.974646 containerd[1486]: time="2026-04-17T23:28:22.974569920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jdlpb,Uid:185127b5-5e67-42e4-9282-71e105156b12,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:28:23.009892 containerd[1486]: time="2026-04-17T23:28:23.009674360Z" level=info msg="StartContainer for \"855d03007b884f8cfabf3d90bc672871e4abbacdebf6e04ac5016a06391ca057\" returns successfully" Apr 17 23:28:23.019293 containerd[1486]: time="2026-04-17T23:28:23.019120400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:23.019293 containerd[1486]: time="2026-04-17T23:28:23.019217120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:23.019293 containerd[1486]: time="2026-04-17T23:28:23.019244440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:23.019561 containerd[1486]: time="2026-04-17T23:28:23.019363920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:23.044027 systemd[1]: Started cri-containerd-0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc.scope - libcontainer container 0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc. 
Apr 17 23:28:23.084427 containerd[1486]: time="2026-04-17T23:28:23.083472400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jdlpb,Uid:185127b5-5e67-42e4-9282-71e105156b12,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc\"" Apr 17 23:28:23.089048 containerd[1486]: time="2026-04-17T23:28:23.088527680Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:28:23.792087 kubelet[2529]: I0417 23:28:23.791964 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5hnb" podStartSLOduration=1.7919424400000001 podStartE2EDuration="1.79194244s" podCreationTimestamp="2026-04-17 23:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:23.59632008 +0000 UTC m=+8.214421761" watchObservedRunningTime="2026-04-17 23:28:23.79194244 +0000 UTC m=+8.410044041" Apr 17 23:28:23.642252 systemd-resolved[1334]: Clock change detected. Flushing caches. Apr 17 23:28:23.649952 systemd-journald[1125]: Time jumped backwards, rotating. Apr 17 23:28:23.642356 systemd-timesyncd[1366]: Contacted time server 172.104.149.161:123 (2.flatcar.pool.ntp.org). Apr 17 23:28:23.642470 systemd-timesyncd[1366]: Initial clock synchronization to Fri 2026-04-17 23:28:23.641985 UTC. Apr 17 23:28:24.228395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464615657.mount: Deactivated successfully. 
Apr 17 23:28:24.920866 containerd[1486]: time="2026-04-17T23:28:24.920792590Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:24.922711 containerd[1486]: time="2026-04-17T23:28:24.922653310Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 17 23:28:24.924615 containerd[1486]: time="2026-04-17T23:28:24.923576150Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:24.926501 containerd[1486]: time="2026-04-17T23:28:24.926452230Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:24.927250 containerd[1486]: time="2026-04-17T23:28:24.927198230Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.293206s" Apr 17 23:28:24.927250 containerd[1486]: time="2026-04-17T23:28:24.927242870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 17 23:28:24.936120 containerd[1486]: time="2026-04-17T23:28:24.936070830Z" level=info msg="CreateContainer within sandbox \"0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:28:24.960534 containerd[1486]: time="2026-04-17T23:28:24.960472790Z" level=info msg="CreateContainer within sandbox 
\"0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c\"" Apr 17 23:28:24.962310 containerd[1486]: time="2026-04-17T23:28:24.962204950Z" level=info msg="StartContainer for \"402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c\"" Apr 17 23:28:25.003156 systemd[1]: Started cri-containerd-402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c.scope - libcontainer container 402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c. Apr 17 23:28:25.036515 containerd[1486]: time="2026-04-17T23:28:25.036365910Z" level=info msg="StartContainer for \"402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c\" returns successfully" Apr 17 23:28:25.229754 systemd[1]: run-containerd-runc-k8s.io-402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c-runc.W0i3KW.mount: Deactivated successfully. Apr 17 23:28:30.413899 update_engine[1467]: I20260417 23:28:30.413250 1467 update_attempter.cc:509] Updating boot flags... Apr 17 23:28:30.484021 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2912) Apr 17 23:28:31.229390 sudo[1679]: pam_unix(sudo:session): session closed for user root Apr 17 23:28:31.246989 sshd[1676]: pam_unix(sshd:session): session closed for user core Apr 17 23:28:31.253521 systemd[1]: sshd@6-159.69.127.159:22-50.85.169.122:42742.service: Deactivated successfully. Apr 17 23:28:31.257384 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:28:31.258979 systemd[1]: session-7.scope: Consumed 7.711s CPU time, 152.8M memory peak, 0B memory swap peak. Apr 17 23:28:31.259699 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:28:31.261004 systemd-logind[1466]: Removed session 7. 
Apr 17 23:28:37.048343 kubelet[2529]: I0417 23:28:37.048264 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-jdlpb" podStartSLOduration=12.75133839 podStartE2EDuration="15.04824619s" podCreationTimestamp="2026-04-17 23:28:22 +0000 UTC" firstStartedPulling="2026-04-17 23:28:23.08676488 +0000 UTC m=+7.704866441" lastFinishedPulling="2026-04-17 23:28:24.92909595 +0000 UTC m=+10.001774241" observedRunningTime="2026-04-17 23:28:25.13973359 +0000 UTC m=+10.212411881" watchObservedRunningTime="2026-04-17 23:28:37.04824619 +0000 UTC m=+22.120924481" Apr 17 23:28:37.061868 systemd[1]: Created slice kubepods-besteffort-pod60c7e0b6_242a_43a5_869b_c9153105ebdc.slice - libcontainer container kubepods-besteffort-pod60c7e0b6_242a_43a5_869b_c9153105ebdc.slice. Apr 17 23:28:37.086578 kubelet[2529]: I0417 23:28:37.086537 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmss9\" (UniqueName: \"kubernetes.io/projected/60c7e0b6-242a-43a5-869b-c9153105ebdc-kube-api-access-dmss9\") pod \"calico-typha-7f94b9b8c7-t7944\" (UID: \"60c7e0b6-242a-43a5-869b-c9153105ebdc\") " pod="calico-system/calico-typha-7f94b9b8c7-t7944" Apr 17 23:28:37.086578 kubelet[2529]: I0417 23:28:37.086585 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/60c7e0b6-242a-43a5-869b-c9153105ebdc-typha-certs\") pod \"calico-typha-7f94b9b8c7-t7944\" (UID: \"60c7e0b6-242a-43a5-869b-c9153105ebdc\") " pod="calico-system/calico-typha-7f94b9b8c7-t7944" Apr 17 23:28:37.086722 kubelet[2529]: I0417 23:28:37.086603 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60c7e0b6-242a-43a5-869b-c9153105ebdc-tigera-ca-bundle\") pod \"calico-typha-7f94b9b8c7-t7944\" (UID: 
\"60c7e0b6-242a-43a5-869b-c9153105ebdc\") " pod="calico-system/calico-typha-7f94b9b8c7-t7944" Apr 17 23:28:37.186603 systemd[1]: Created slice kubepods-besteffort-pod0c5fa3c3_0a95_4be1_9167_41dd070732fc.slice - libcontainer container kubepods-besteffort-pod0c5fa3c3_0a95_4be1_9167_41dd070732fc.slice. Apr 17 23:28:37.188074 kubelet[2529]: I0417 23:28:37.188034 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-sys-fs\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188192 kubelet[2529]: I0417 23:28:37.188086 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-cni-net-dir\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188192 kubelet[2529]: I0417 23:28:37.188104 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-var-lib-calico\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188192 kubelet[2529]: I0417 23:28:37.188123 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-xtables-lock\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188192 kubelet[2529]: I0417 23:28:37.188146 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5fa3c3-0a95-4be1-9167-41dd070732fc-tigera-ca-bundle\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188192 kubelet[2529]: I0417 23:28:37.188161 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-flexvol-driver-host\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188276 kubelet[2529]: I0417 23:28:37.188176 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0c5fa3c3-0a95-4be1-9167-41dd070732fc-node-certs\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188276 kubelet[2529]: I0417 23:28:37.188189 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-nodeproc\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188276 kubelet[2529]: I0417 23:28:37.188203 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bt5x\" (UniqueName: \"kubernetes.io/projected/0c5fa3c3-0a95-4be1-9167-41dd070732fc-kube-api-access-2bt5x\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188276 kubelet[2529]: I0417 23:28:37.188230 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-var-run-calico\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188276 kubelet[2529]: I0417 23:28:37.188245 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-bpffs\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188357 kubelet[2529]: I0417 23:28:37.188260 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-policysync\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188357 kubelet[2529]: I0417 23:28:37.188276 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-cni-log-dir\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188357 kubelet[2529]: I0417 23:28:37.188290 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-cni-bin-dir\") pod \"calico-node-n4vxv\" (UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.188357 kubelet[2529]: I0417 23:28:37.188308 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c5fa3c3-0a95-4be1-9167-41dd070732fc-lib-modules\") pod \"calico-node-n4vxv\" 
(UID: \"0c5fa3c3-0a95-4be1-9167-41dd070732fc\") " pod="calico-system/calico-node-n4vxv" Apr 17 23:28:37.291901 kubelet[2529]: E0417 23:28:37.291585 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:37.294381 kubelet[2529]: E0417 23:28:37.294247 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.294381 kubelet[2529]: W0417 23:28:37.294281 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.294381 kubelet[2529]: E0417 23:28:37.294303 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.300460 kubelet[2529]: E0417 23:28:37.299744 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.300460 kubelet[2529]: W0417 23:28:37.299781 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.300460 kubelet[2529]: E0417 23:28:37.299805 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.323639 kubelet[2529]: E0417 23:28:37.323591 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.323639 kubelet[2529]: W0417 23:28:37.323619 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.323639 kubelet[2529]: E0417 23:28:37.323641 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.372889 containerd[1486]: time="2026-04-17T23:28:37.372831430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f94b9b8c7-t7944,Uid:60c7e0b6-242a-43a5-869b-c9153105ebdc,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:37.377375 kubelet[2529]: E0417 23:28:37.377143 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.377375 kubelet[2529]: W0417 23:28:37.377171 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.377375 kubelet[2529]: E0417 23:28:37.377194 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.377766 kubelet[2529]: E0417 23:28:37.377657 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.377766 kubelet[2529]: W0417 23:28:37.377687 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.377766 kubelet[2529]: E0417 23:28:37.377736 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.378199 kubelet[2529]: E0417 23:28:37.378184 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.378355 kubelet[2529]: W0417 23:28:37.378276 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.378355 kubelet[2529]: E0417 23:28:37.378294 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.378855 kubelet[2529]: E0417 23:28:37.378838 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.379002 kubelet[2529]: W0417 23:28:37.378899 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.379002 kubelet[2529]: E0417 23:28:37.378928 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.379397 kubelet[2529]: E0417 23:28:37.379324 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.379397 kubelet[2529]: W0417 23:28:37.379337 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.379397 kubelet[2529]: E0417 23:28:37.379351 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.379869 kubelet[2529]: E0417 23:28:37.379811 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.379869 kubelet[2529]: W0417 23:28:37.379826 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.379869 kubelet[2529]: E0417 23:28:37.379837 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.381215 kubelet[2529]: E0417 23:28:37.380411 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.381215 kubelet[2529]: W0417 23:28:37.380426 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.381215 kubelet[2529]: E0417 23:28:37.380439 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.381572 kubelet[2529]: E0417 23:28:37.381522 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.381572 kubelet[2529]: W0417 23:28:37.381536 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.381572 kubelet[2529]: E0417 23:28:37.381547 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.382037 kubelet[2529]: E0417 23:28:37.381943 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.382037 kubelet[2529]: W0417 23:28:37.381967 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.382037 kubelet[2529]: E0417 23:28:37.381978 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.382631 kubelet[2529]: E0417 23:28:37.382512 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.382631 kubelet[2529]: W0417 23:28:37.382527 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.382631 kubelet[2529]: E0417 23:28:37.382538 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.383844 kubelet[2529]: E0417 23:28:37.383390 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.384112 kubelet[2529]: W0417 23:28:37.383931 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.384112 kubelet[2529]: E0417 23:28:37.383953 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.384262 kubelet[2529]: E0417 23:28:37.384248 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.384313 kubelet[2529]: W0417 23:28:37.384303 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.384368 kubelet[2529]: E0417 23:28:37.384358 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.384869 kubelet[2529]: E0417 23:28:37.384724 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.384869 kubelet[2529]: W0417 23:28:37.384738 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.384869 kubelet[2529]: E0417 23:28:37.384749 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.385917 kubelet[2529]: E0417 23:28:37.385866 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.388162 kubelet[2529]: W0417 23:28:37.387950 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.388162 kubelet[2529]: E0417 23:28:37.387982 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.391354 kubelet[2529]: E0417 23:28:37.391163 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.391354 kubelet[2529]: W0417 23:28:37.391188 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.391354 kubelet[2529]: E0417 23:28:37.391207 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.394551 kubelet[2529]: E0417 23:28:37.394191 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.394551 kubelet[2529]: W0417 23:28:37.394214 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.394551 kubelet[2529]: E0417 23:28:37.394235 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.398213 kubelet[2529]: E0417 23:28:37.398025 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.398213 kubelet[2529]: W0417 23:28:37.398054 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.398213 kubelet[2529]: E0417 23:28:37.398076 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.398526 kubelet[2529]: E0417 23:28:37.398428 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.398526 kubelet[2529]: W0417 23:28:37.398440 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.398526 kubelet[2529]: E0417 23:28:37.398450 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.398651 kubelet[2529]: E0417 23:28:37.398639 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.398709 kubelet[2529]: W0417 23:28:37.398697 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.398763 kubelet[2529]: E0417 23:28:37.398752 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.400221 kubelet[2529]: E0417 23:28:37.399972 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.400221 kubelet[2529]: W0417 23:28:37.399992 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.400221 kubelet[2529]: E0417 23:28:37.400005 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.400789 kubelet[2529]: E0417 23:28:37.400632 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.400789 kubelet[2529]: W0417 23:28:37.400647 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.400789 kubelet[2529]: E0417 23:28:37.400659 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.400789 kubelet[2529]: I0417 23:28:37.400689 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8d9fd23-5b21-4171-9475-81c4da566eda-kubelet-dir\") pod \"csi-node-driver-zxqb4\" (UID: \"d8d9fd23-5b21-4171-9475-81c4da566eda\") " pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:37.401335 kubelet[2529]: E0417 23:28:37.401135 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.401335 kubelet[2529]: W0417 23:28:37.401247 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.401335 kubelet[2529]: E0417 23:28:37.401262 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.401335 kubelet[2529]: I0417 23:28:37.401291 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6jk\" (UniqueName: \"kubernetes.io/projected/d8d9fd23-5b21-4171-9475-81c4da566eda-kube-api-access-nw6jk\") pod \"csi-node-driver-zxqb4\" (UID: \"d8d9fd23-5b21-4171-9475-81c4da566eda\") " pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:37.403002 kubelet[2529]: E0417 23:28:37.402968 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.403002 kubelet[2529]: W0417 23:28:37.402994 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.403002 kubelet[2529]: E0417 23:28:37.403008 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.405016 kubelet[2529]: E0417 23:28:37.404848 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.405016 kubelet[2529]: W0417 23:28:37.404887 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.405016 kubelet[2529]: E0417 23:28:37.404912 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.406347 kubelet[2529]: E0417 23:28:37.406316 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.406347 kubelet[2529]: W0417 23:28:37.406340 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.406439 kubelet[2529]: E0417 23:28:37.406359 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.406733 kubelet[2529]: I0417 23:28:37.406705 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d8d9fd23-5b21-4171-9475-81c4da566eda-varrun\") pod \"csi-node-driver-zxqb4\" (UID: \"d8d9fd23-5b21-4171-9475-81c4da566eda\") " pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:37.407008 kubelet[2529]: E0417 23:28:37.406986 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.407008 kubelet[2529]: W0417 23:28:37.407005 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.407110 kubelet[2529]: E0417 23:28:37.407020 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.407622 kubelet[2529]: E0417 23:28:37.407498 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.407622 kubelet[2529]: W0417 23:28:37.407517 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.407622 kubelet[2529]: E0417 23:28:37.407531 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.407844 kubelet[2529]: E0417 23:28:37.407805 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.407844 kubelet[2529]: W0417 23:28:37.407823 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.407937 kubelet[2529]: E0417 23:28:37.407834 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.408465 kubelet[2529]: I0417 23:28:37.408057 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8d9fd23-5b21-4171-9475-81c4da566eda-socket-dir\") pod \"csi-node-driver-zxqb4\" (UID: \"d8d9fd23-5b21-4171-9475-81c4da566eda\") " pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:37.408666 kubelet[2529]: E0417 23:28:37.408645 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.408666 kubelet[2529]: W0417 23:28:37.408663 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.408950 kubelet[2529]: E0417 23:28:37.408675 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.409116 kubelet[2529]: E0417 23:28:37.409057 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.409116 kubelet[2529]: W0417 23:28:37.409086 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.409116 kubelet[2529]: E0417 23:28:37.409098 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.409361 kubelet[2529]: E0417 23:28:37.409340 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.409361 kubelet[2529]: W0417 23:28:37.409355 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.409419 kubelet[2529]: E0417 23:28:37.409364 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.409419 kubelet[2529]: I0417 23:28:37.409401 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8d9fd23-5b21-4171-9475-81c4da566eda-registration-dir\") pod \"csi-node-driver-zxqb4\" (UID: \"d8d9fd23-5b21-4171-9475-81c4da566eda\") " pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:37.410811 kubelet[2529]: E0417 23:28:37.410555 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.410811 kubelet[2529]: W0417 23:28:37.410572 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.410811 kubelet[2529]: E0417 23:28:37.410583 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.410811 kubelet[2529]: E0417 23:28:37.410796 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.410811 kubelet[2529]: W0417 23:28:37.410804 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.410811 kubelet[2529]: E0417 23:28:37.410812 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.411003 kubelet[2529]: E0417 23:28:37.410983 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.411003 kubelet[2529]: W0417 23:28:37.410991 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.411003 kubelet[2529]: E0417 23:28:37.410999 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.412573 kubelet[2529]: E0417 23:28:37.411248 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.412573 kubelet[2529]: W0417 23:28:37.411265 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.412573 kubelet[2529]: E0417 23:28:37.411275 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.425327 containerd[1486]: time="2026-04-17T23:28:37.425201510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:37.425463 containerd[1486]: time="2026-04-17T23:28:37.425338350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:37.425463 containerd[1486]: time="2026-04-17T23:28:37.425364910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:37.425598 containerd[1486]: time="2026-04-17T23:28:37.425471590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:37.454149 systemd[1]: Started cri-containerd-b834a56d14fc971258693b600c1dae25937335bc41ba4cfa18e81969e45710eb.scope - libcontainer container b834a56d14fc971258693b600c1dae25937335bc41ba4cfa18e81969e45710eb. 
Apr 17 23:28:37.502323 containerd[1486]: time="2026-04-17T23:28:37.502148230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n4vxv,Uid:0c5fa3c3-0a95-4be1-9167-41dd070732fc,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:37.510391 containerd[1486]: time="2026-04-17T23:28:37.510266190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f94b9b8c7-t7944,Uid:60c7e0b6-242a-43a5-869b-c9153105ebdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"b834a56d14fc971258693b600c1dae25937335bc41ba4cfa18e81969e45710eb\"" Apr 17 23:28:37.512850 kubelet[2529]: E0417 23:28:37.512809 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.512850 kubelet[2529]: W0417 23:28:37.512834 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.512850 kubelet[2529]: E0417 23:28:37.512856 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.513813 kubelet[2529]: E0417 23:28:37.513790 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.514671 containerd[1486]: time="2026-04-17T23:28:37.514542670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:28:37.514854 kubelet[2529]: W0417 23:28:37.513808 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.514854 kubelet[2529]: E0417 23:28:37.514766 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.515728 kubelet[2529]: E0417 23:28:37.515706 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.515728 kubelet[2529]: W0417 23:28:37.515725 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.515813 kubelet[2529]: E0417 23:28:37.515740 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.517340 kubelet[2529]: E0417 23:28:37.517315 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.517340 kubelet[2529]: W0417 23:28:37.517337 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.517492 kubelet[2529]: E0417 23:28:37.517352 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.518385 kubelet[2529]: E0417 23:28:37.517705 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.518385 kubelet[2529]: W0417 23:28:37.518231 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.518385 kubelet[2529]: E0417 23:28:37.518267 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.518511 kubelet[2529]: E0417 23:28:37.518460 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.518511 kubelet[2529]: W0417 23:28:37.518470 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.518511 kubelet[2529]: E0417 23:28:37.518478 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518620 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.519063 kubelet[2529]: W0417 23:28:37.518632 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518640 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518767 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.519063 kubelet[2529]: W0417 23:28:37.518774 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518785 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518942 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.519063 kubelet[2529]: W0417 23:28:37.518950 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.519063 kubelet[2529]: E0417 23:28:37.518959 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519277 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520267 kubelet[2529]: W0417 23:28:37.519295 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519308 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519560 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520267 kubelet[2529]: W0417 23:28:37.519570 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519581 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519795 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520267 kubelet[2529]: W0417 23:28:37.519804 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519813 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.520267 kubelet[2529]: E0417 23:28:37.519978 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520438 kubelet[2529]: W0417 23:28:37.519986 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520438 kubelet[2529]: E0417 23:28:37.519995 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.520438 kubelet[2529]: E0417 23:28:37.520159 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520438 kubelet[2529]: W0417 23:28:37.520167 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520438 kubelet[2529]: E0417 23:28:37.520175 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.520438 kubelet[2529]: E0417 23:28:37.520307 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520438 kubelet[2529]: W0417 23:28:37.520314 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520438 kubelet[2529]: E0417 23:28:37.520322 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.520574 kubelet[2529]: E0417 23:28:37.520474 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520574 kubelet[2529]: W0417 23:28:37.520481 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520574 kubelet[2529]: E0417 23:28:37.520489 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.520623 kubelet[2529]: E0417 23:28:37.520601 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.520623 kubelet[2529]: W0417 23:28:37.520607 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.520623 kubelet[2529]: E0417 23:28:37.520614 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.521012 kubelet[2529]: E0417 23:28:37.520716 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.521012 kubelet[2529]: W0417 23:28:37.520731 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.521012 kubelet[2529]: E0417 23:28:37.520739 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.521012 kubelet[2529]: E0417 23:28:37.520945 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.521012 kubelet[2529]: W0417 23:28:37.520955 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.521012 kubelet[2529]: E0417 23:28:37.520967 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.521384 kubelet[2529]: E0417 23:28:37.521144 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.521384 kubelet[2529]: W0417 23:28:37.521154 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.521384 kubelet[2529]: E0417 23:28:37.521165 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.521384 kubelet[2529]: E0417 23:28:37.521337 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.521384 kubelet[2529]: W0417 23:28:37.521346 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.521384 kubelet[2529]: E0417 23:28:37.521354 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.521766 kubelet[2529]: E0417 23:28:37.521504 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.521766 kubelet[2529]: W0417 23:28:37.521513 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.521766 kubelet[2529]: E0417 23:28:37.521521 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.523172 kubelet[2529]: E0417 23:28:37.523007 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.523172 kubelet[2529]: W0417 23:28:37.523026 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.523172 kubelet[2529]: E0417 23:28:37.523041 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:37.523397 kubelet[2529]: E0417 23:28:37.523341 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.523397 kubelet[2529]: W0417 23:28:37.523352 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.523397 kubelet[2529]: E0417 23:28:37.523364 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.523709 kubelet[2529]: E0417 23:28:37.523654 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.523709 kubelet[2529]: W0417 23:28:37.523665 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.523709 kubelet[2529]: E0417 23:28:37.523675 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.540883 containerd[1486]: time="2026-04-17T23:28:37.540390430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:37.543308 containerd[1486]: time="2026-04-17T23:28:37.543066190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:37.543308 containerd[1486]: time="2026-04-17T23:28:37.543101710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:37.543308 containerd[1486]: time="2026-04-17T23:28:37.543210430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:37.544475 kubelet[2529]: E0417 23:28:37.544368 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:37.544475 kubelet[2529]: W0417 23:28:37.544401 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:37.544475 kubelet[2529]: E0417 23:28:37.544422 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:37.562107 systemd[1]: Started cri-containerd-1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0.scope - libcontainer container 1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0. Apr 17 23:28:37.595562 containerd[1486]: time="2026-04-17T23:28:37.595433270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n4vxv,Uid:0c5fa3c3-0a95-4be1-9167-41dd070732fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\"" Apr 17 23:28:39.066548 kubelet[2529]: E0417 23:28:39.066492 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:39.076165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123473188.mount: Deactivated successfully. 
Apr 17 23:28:39.616228 containerd[1486]: time="2026-04-17T23:28:39.616182630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:39.618354 containerd[1486]: time="2026-04-17T23:28:39.618308990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 23:28:39.619382 containerd[1486]: time="2026-04-17T23:28:39.619329750Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:39.622551 containerd[1486]: time="2026-04-17T23:28:39.622423230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:39.624186 containerd[1486]: time="2026-04-17T23:28:39.623754990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.10893492s" Apr 17 23:28:39.624186 containerd[1486]: time="2026-04-17T23:28:39.623797350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 23:28:39.625730 containerd[1486]: time="2026-04-17T23:28:39.625506710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:28:39.645021 containerd[1486]: time="2026-04-17T23:28:39.644978790Z" level=info msg="CreateContainer within sandbox \"b834a56d14fc971258693b600c1dae25937335bc41ba4cfa18e81969e45710eb\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:28:39.671489 containerd[1486]: time="2026-04-17T23:28:39.671432590Z" level=info msg="CreateContainer within sandbox \"b834a56d14fc971258693b600c1dae25937335bc41ba4cfa18e81969e45710eb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86b102d00edb5e5ee7e491d9d72cbfcf8ccd4723681e485bc97392b3d393cb32\"" Apr 17 23:28:39.673778 containerd[1486]: time="2026-04-17T23:28:39.672398870Z" level=info msg="StartContainer for \"86b102d00edb5e5ee7e491d9d72cbfcf8ccd4723681e485bc97392b3d393cb32\"" Apr 17 23:28:39.708093 systemd[1]: Started cri-containerd-86b102d00edb5e5ee7e491d9d72cbfcf8ccd4723681e485bc97392b3d393cb32.scope - libcontainer container 86b102d00edb5e5ee7e491d9d72cbfcf8ccd4723681e485bc97392b3d393cb32. Apr 17 23:28:39.752261 containerd[1486]: time="2026-04-17T23:28:39.752208110Z" level=info msg="StartContainer for \"86b102d00edb5e5ee7e491d9d72cbfcf8ccd4723681e485bc97392b3d393cb32\" returns successfully" Apr 17 23:28:40.210816 kubelet[2529]: I0417 23:28:40.210474 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f94b9b8c7-t7944" podStartSLOduration=1.09967467 podStartE2EDuration="3.21045279s" podCreationTimestamp="2026-04-17 23:28:37 +0000 UTC" firstStartedPulling="2026-04-17 23:28:37.51398571 +0000 UTC m=+22.586664001" lastFinishedPulling="2026-04-17 23:28:39.62476383 +0000 UTC m=+24.697442121" observedRunningTime="2026-04-17 23:28:40.19216355 +0000 UTC m=+25.264841921" watchObservedRunningTime="2026-04-17 23:28:40.21045279 +0000 UTC m=+25.283131081" Apr 17 23:28:40.224071 kubelet[2529]: E0417 23:28:40.224027 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.224181 kubelet[2529]: W0417 23:28:40.224108 2529 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.224181 kubelet[2529]: E0417 23:28:40.224135 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.225001 kubelet[2529]: E0417 23:28:40.224970 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.225069 kubelet[2529]: W0417 23:28:40.224994 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.225069 kubelet[2529]: E0417 23:28:40.225050 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.225314 kubelet[2529]: E0417 23:28:40.225293 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.225314 kubelet[2529]: W0417 23:28:40.225309 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.225314 kubelet[2529]: E0417 23:28:40.225320 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:40.225730 kubelet[2529]: E0417 23:28:40.225698 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.225730 kubelet[2529]: W0417 23:28:40.225718 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.225730 kubelet[2529]: E0417 23:28:40.225731 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.226523 kubelet[2529]: E0417 23:28:40.226488 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.226523 kubelet[2529]: W0417 23:28:40.226508 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.226523 kubelet[2529]: E0417 23:28:40.226522 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:40.227302 kubelet[2529]: E0417 23:28:40.227255 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.227302 kubelet[2529]: W0417 23:28:40.227280 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.227302 kubelet[2529]: E0417 23:28:40.227296 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.227536 kubelet[2529]: E0417 23:28:40.227509 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.227536 kubelet[2529]: W0417 23:28:40.227525 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.227536 kubelet[2529]: E0417 23:28:40.227536 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:40.228111 kubelet[2529]: E0417 23:28:40.228001 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.228111 kubelet[2529]: W0417 23:28:40.228021 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.228111 kubelet[2529]: E0417 23:28:40.228034 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.229141 kubelet[2529]: E0417 23:28:40.229116 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.229141 kubelet[2529]: W0417 23:28:40.229134 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.229227 kubelet[2529]: E0417 23:28:40.229147 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:40.229479 kubelet[2529]: E0417 23:28:40.229455 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.229479 kubelet[2529]: W0417 23:28:40.229473 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.229537 kubelet[2529]: E0417 23:28:40.229485 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.230054 kubelet[2529]: E0417 23:28:40.230022 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.230054 kubelet[2529]: W0417 23:28:40.230042 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.230054 kubelet[2529]: E0417 23:28:40.230056 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:40.232244 kubelet[2529]: E0417 23:28:40.232216 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.232244 kubelet[2529]: W0417 23:28:40.232237 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.232244 kubelet[2529]: E0417 23:28:40.232251 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:40.232568 kubelet[2529]: E0417 23:28:40.232541 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:40.232568 kubelet[2529]: W0417 23:28:40.232557 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:40.232568 kubelet[2529]: E0417 23:28:40.232569 2529 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:41.064961 kubelet[2529]: E0417 23:28:41.064544 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:41.089190 containerd[1486]: time="2026-04-17T23:28:41.089128790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:41.090474 containerd[1486]: time="2026-04-17T23:28:41.090424470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 17 23:28:41.091986 containerd[1486]: time="2026-04-17T23:28:41.091957070Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:41.094979 containerd[1486]: time="2026-04-17T23:28:41.094942710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:41.096124 containerd[1486]: time="2026-04-17T23:28:41.096089190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.47053716s" Apr 17 23:28:41.096214 containerd[1486]: time="2026-04-17T23:28:41.096130870Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 17 23:28:41.103484 containerd[1486]: time="2026-04-17T23:28:41.103441470Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:28:41.124195 containerd[1486]: time="2026-04-17T23:28:41.124138110Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3\"" Apr 17 23:28:41.127087 containerd[1486]: time="2026-04-17T23:28:41.126939910Z" level=info msg="StartContainer for \"621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3\"" Apr 17 23:28:41.158959 systemd[1]: run-containerd-runc-k8s.io-621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3-runc.mKRdX4.mount: Deactivated successfully. Apr 17 23:28:41.168118 systemd[1]: Started cri-containerd-621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3.scope - libcontainer container 621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3. Apr 17 23:28:41.201930 containerd[1486]: time="2026-04-17T23:28:41.201806430Z" level=info msg="StartContainer for \"621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3\" returns successfully" Apr 17 23:28:41.221375 systemd[1]: cri-containerd-621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3.scope: Deactivated successfully. 
Apr 17 23:28:41.355061 containerd[1486]: time="2026-04-17T23:28:41.354760550Z" level=info msg="shim disconnected" id=621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3 namespace=k8s.io Apr 17 23:28:41.355061 containerd[1486]: time="2026-04-17T23:28:41.354909870Z" level=warning msg="cleaning up after shim disconnected" id=621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3 namespace=k8s.io Apr 17 23:28:41.355061 containerd[1486]: time="2026-04-17T23:28:41.354927950Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:28:42.117556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621a69eb19ed06c9499f9c8b578dea46914a5e18ea9a37eecb760b72b8498df3-rootfs.mount: Deactivated successfully. Apr 17 23:28:42.186535 containerd[1486]: time="2026-04-17T23:28:42.186262990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:28:43.067167 kubelet[2529]: E0417 23:28:43.067120 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:45.066545 kubelet[2529]: E0417 23:28:45.066496 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:46.685500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636913609.mount: Deactivated successfully. 
Apr 17 23:28:46.715205 containerd[1486]: time="2026-04-17T23:28:46.715135350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:46.716570 containerd[1486]: time="2026-04-17T23:28:46.716530270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 17 23:28:46.717463 containerd[1486]: time="2026-04-17T23:28:46.717412630Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:46.721690 containerd[1486]: time="2026-04-17T23:28:46.721097110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:46.721690 containerd[1486]: time="2026-04-17T23:28:46.721543830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.5352234s" Apr 17 23:28:46.721690 containerd[1486]: time="2026-04-17T23:28:46.721579190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 17 23:28:46.733164 containerd[1486]: time="2026-04-17T23:28:46.733111550Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:28:46.756630 containerd[1486]: time="2026-04-17T23:28:46.756541590Z" level=info 
msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7\"" Apr 17 23:28:46.759807 containerd[1486]: time="2026-04-17T23:28:46.759297270Z" level=info msg="StartContainer for \"9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7\"" Apr 17 23:28:46.800299 systemd[1]: Started cri-containerd-9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7.scope - libcontainer container 9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7. Apr 17 23:28:46.840289 containerd[1486]: time="2026-04-17T23:28:46.840209910Z" level=info msg="StartContainer for \"9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7\" returns successfully" Apr 17 23:28:46.944595 systemd[1]: cri-containerd-9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7.scope: Deactivated successfully. 
Apr 17 23:28:47.065472 kubelet[2529]: E0417 23:28:47.065276 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:47.127818 containerd[1486]: time="2026-04-17T23:28:47.127596950Z" level=info msg="shim disconnected" id=9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7 namespace=k8s.io Apr 17 23:28:47.127818 containerd[1486]: time="2026-04-17T23:28:47.127653790Z" level=warning msg="cleaning up after shim disconnected" id=9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7 namespace=k8s.io Apr 17 23:28:47.127818 containerd[1486]: time="2026-04-17T23:28:47.127663150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:28:47.140153 containerd[1486]: time="2026-04-17T23:28:47.139118950Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:28:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:28:47.215429 containerd[1486]: time="2026-04-17T23:28:47.211050030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:28:47.686658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c2e8ce4072f2e32bc777730d055fbbe80bd99cb50450ef38499f9219228d6c7-rootfs.mount: Deactivated successfully. 
Apr 17 23:28:49.065483 kubelet[2529]: E0417 23:28:49.065104 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:49.728202 containerd[1486]: time="2026-04-17T23:28:49.728145390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:49.729306 containerd[1486]: time="2026-04-17T23:28:49.729112750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 17 23:28:49.730379 containerd[1486]: time="2026-04-17T23:28:49.730032030Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:49.732582 containerd[1486]: time="2026-04-17T23:28:49.732538790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:49.733622 containerd[1486]: time="2026-04-17T23:28:49.733588310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.52249844s" Apr 17 23:28:49.733622 containerd[1486]: time="2026-04-17T23:28:49.733622110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 17 23:28:49.739408 containerd[1486]: time="2026-04-17T23:28:49.739371350Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:28:49.760322 containerd[1486]: time="2026-04-17T23:28:49.760206670Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2\"" Apr 17 23:28:49.762113 containerd[1486]: time="2026-04-17T23:28:49.762041150Z" level=info msg="StartContainer for \"960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2\"" Apr 17 23:28:49.798045 systemd[1]: Started cri-containerd-960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2.scope - libcontainer container 960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2. Apr 17 23:28:49.832334 containerd[1486]: time="2026-04-17T23:28:49.832017630Z" level=info msg="StartContainer for \"960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2\" returns successfully" Apr 17 23:28:50.408131 containerd[1486]: time="2026-04-17T23:28:50.408076950Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:28:50.411762 systemd[1]: cri-containerd-960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2.scope: Deactivated successfully. Apr 17 23:28:50.434202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2-rootfs.mount: Deactivated successfully. 
Apr 17 23:28:50.436700 kubelet[2529]: I0417 23:28:50.435804 2529 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 17 23:28:50.524259 containerd[1486]: time="2026-04-17T23:28:50.524192150Z" level=info msg="shim disconnected" id=960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2 namespace=k8s.io Apr 17 23:28:50.524954 kubelet[2529]: I0417 23:28:50.524579 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6916156d-2f5c-4714-9bd0-3f37fd1a0dc3-config-volume\") pod \"coredns-66bc5c9577-hjmd8\" (UID: \"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3\") " pod="kube-system/coredns-66bc5c9577-hjmd8" Apr 17 23:28:50.524954 kubelet[2529]: I0417 23:28:50.524622 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dvg\" (UniqueName: \"kubernetes.io/projected/6916156d-2f5c-4714-9bd0-3f37fd1a0dc3-kube-api-access-t9dvg\") pod \"coredns-66bc5c9577-hjmd8\" (UID: \"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3\") " pod="kube-system/coredns-66bc5c9577-hjmd8" Apr 17 23:28:50.525086 containerd[1486]: time="2026-04-17T23:28:50.524736350Z" level=warning msg="cleaning up after shim disconnected" id=960a6f36d8ca85827123754c2374fb95306574dffedb125b6e75737e3dfbc7b2 namespace=k8s.io Apr 17 23:28:50.525086 containerd[1486]: time="2026-04-17T23:28:50.524758510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:28:50.530670 systemd[1]: Created slice kubepods-burstable-pod6916156d_2f5c_4714_9bd0_3f37fd1a0dc3.slice - libcontainer container kubepods-burstable-pod6916156d_2f5c_4714_9bd0_3f37fd1a0dc3.slice. Apr 17 23:28:50.551161 systemd[1]: Created slice kubepods-burstable-podd866b1df_e265_46c7_a073_f92742fbb2a8.slice - libcontainer container kubepods-burstable-podd866b1df_e265_46c7_a073_f92742fbb2a8.slice. 
Apr 17 23:28:50.562934 systemd[1]: Created slice kubepods-besteffort-pod48917d2d_23fd_499e_a8bf_6a46380b5f6d.slice - libcontainer container kubepods-besteffort-pod48917d2d_23fd_499e_a8bf_6a46380b5f6d.slice. Apr 17 23:28:50.573958 systemd[1]: Created slice kubepods-besteffort-podce1c3055_4d37_4bc7_b4f9_51d0ad58aa3c.slice - libcontainer container kubepods-besteffort-podce1c3055_4d37_4bc7_b4f9_51d0ad58aa3c.slice. Apr 17 23:28:50.583249 systemd[1]: Created slice kubepods-besteffort-pod6bc2b360_2bde_490c_9433_beeae81acf45.slice - libcontainer container kubepods-besteffort-pod6bc2b360_2bde_490c_9433_beeae81acf45.slice. Apr 17 23:28:50.595447 systemd[1]: Created slice kubepods-besteffort-pod8ec0d86c_9bed_4b8a_bac2_d5b0be9c767e.slice - libcontainer container kubepods-besteffort-pod8ec0d86c_9bed_4b8a_bac2_d5b0be9c767e.slice. Apr 17 23:28:50.601254 systemd[1]: Created slice kubepods-besteffort-pod9fdd0055_33f6_4661_b37f_fdeacc1cf39d.slice - libcontainer container kubepods-besteffort-pod9fdd0055_33f6_4661_b37f_fdeacc1cf39d.slice. 
Apr 17 23:28:50.624948 kubelet[2529]: I0417 23:28:50.624908 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-kds4v\" (UID: \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\") " pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:50.625259 kubelet[2529]: I0417 23:28:50.625128 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8g9\" (UniqueName: \"kubernetes.io/projected/48917d2d-23fd-499e-a8bf-6a46380b5f6d-kube-api-access-bj8g9\") pod \"calico-kube-controllers-6cfcc84fc9-h7qh6\" (UID: \"48917d2d-23fd-499e-a8bf-6a46380b5f6d\") " pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" Apr 17 23:28:50.625259 kubelet[2529]: I0417 23:28:50.625154 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-ca-bundle\") pod \"whisker-68d6dbbfb6-zsktw\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:50.625259 kubelet[2529]: I0417 23:28:50.625207 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-kds4v\" (UID: \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\") " pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:50.625259 kubelet[2529]: I0417 23:28:50.625225 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c-calico-apiserver-certs\") pod \"calico-apiserver-5865bd758-tk57g\" (UID: 
\"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c\") " pod="calico-system/calico-apiserver-5865bd758-tk57g" Apr 17 23:28:50.625384 kubelet[2529]: I0417 23:28:50.625279 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d866b1df-e265-46c7-a073-f92742fbb2a8-config-volume\") pod \"coredns-66bc5c9577-tkqp6\" (UID: \"d866b1df-e265-46c7-a073-f92742fbb2a8\") " pod="kube-system/coredns-66bc5c9577-tkqp6" Apr 17 23:28:50.625384 kubelet[2529]: I0417 23:28:50.625346 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mvm\" (UniqueName: \"kubernetes.io/projected/9fdd0055-33f6-4661-b37f-fdeacc1cf39d-kube-api-access-k9mvm\") pod \"calico-apiserver-5865bd758-vdph8\" (UID: \"9fdd0055-33f6-4661-b37f-fdeacc1cf39d\") " pod="calico-system/calico-apiserver-5865bd758-vdph8" Apr 17 23:28:50.625384 kubelet[2529]: I0417 23:28:50.625379 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlxt\" (UniqueName: \"kubernetes.io/projected/ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c-kube-api-access-tqlxt\") pod \"calico-apiserver-5865bd758-tk57g\" (UID: \"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c\") " pod="calico-system/calico-apiserver-5865bd758-tk57g" Apr 17 23:28:50.625455 kubelet[2529]: I0417 23:28:50.625402 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-nginx-config\") pod \"whisker-68d6dbbfb6-zsktw\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:50.625480 kubelet[2529]: I0417 23:28:50.625454 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/9fdd0055-33f6-4661-b37f-fdeacc1cf39d-calico-apiserver-certs\") pod \"calico-apiserver-5865bd758-vdph8\" (UID: \"9fdd0055-33f6-4661-b37f-fdeacc1cf39d\") " pod="calico-system/calico-apiserver-5865bd758-vdph8" Apr 17 23:28:50.625503 kubelet[2529]: I0417 23:28:50.625477 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-backend-key-pair\") pod \"whisker-68d6dbbfb6-zsktw\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:50.625503 kubelet[2529]: I0417 23:28:50.625492 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mws\" (UniqueName: \"kubernetes.io/projected/d866b1df-e265-46c7-a073-f92742fbb2a8-kube-api-access-48mws\") pod \"coredns-66bc5c9577-tkqp6\" (UID: \"d866b1df-e265-46c7-a073-f92742fbb2a8\") " pod="kube-system/coredns-66bc5c9577-tkqp6" Apr 17 23:28:50.626906 kubelet[2529]: I0417 23:28:50.625553 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2djk\" (UniqueName: \"kubernetes.io/projected/8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e-kube-api-access-v2djk\") pod \"goldmane-cccfbd5cf-kds4v\" (UID: \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\") " pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:50.626906 kubelet[2529]: I0417 23:28:50.625954 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e-config\") pod \"goldmane-cccfbd5cf-kds4v\" (UID: \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\") " pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:50.626906 kubelet[2529]: I0417 23:28:50.625990 2529 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48917d2d-23fd-499e-a8bf-6a46380b5f6d-tigera-ca-bundle\") pod \"calico-kube-controllers-6cfcc84fc9-h7qh6\" (UID: \"48917d2d-23fd-499e-a8bf-6a46380b5f6d\") " pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" Apr 17 23:28:50.626906 kubelet[2529]: I0417 23:28:50.626010 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgkv\" (UniqueName: \"kubernetes.io/projected/6bc2b360-2bde-490c-9433-beeae81acf45-kube-api-access-fcgkv\") pod \"whisker-68d6dbbfb6-zsktw\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:50.847470 containerd[1486]: time="2026-04-17T23:28:50.847409270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hjmd8,Uid:6916156d-2f5c-4714-9bd0-3f37fd1a0dc3,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:50.863215 containerd[1486]: time="2026-04-17T23:28:50.861847510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkqp6,Uid:d866b1df-e265-46c7-a073-f92742fbb2a8,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:50.874436 containerd[1486]: time="2026-04-17T23:28:50.874375070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfcc84fc9-h7qh6,Uid:48917d2d-23fd-499e-a8bf-6a46380b5f6d,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:50.880550 containerd[1486]: time="2026-04-17T23:28:50.880513670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-tk57g,Uid:ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:50.891852 containerd[1486]: time="2026-04-17T23:28:50.891771830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68d6dbbfb6-zsktw,Uid:6bc2b360-2bde-490c-9433-beeae81acf45,Namespace:calico-system,Attempt:0,}" Apr 17 
23:28:50.903336 containerd[1486]: time="2026-04-17T23:28:50.903270470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kds4v,Uid:8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:50.909009 containerd[1486]: time="2026-04-17T23:28:50.908955550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-vdph8,Uid:9fdd0055-33f6-4661-b37f-fdeacc1cf39d,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:51.089312 systemd[1]: Created slice kubepods-besteffort-podd8d9fd23_5b21_4171_9475_81c4da566eda.slice - libcontainer container kubepods-besteffort-podd8d9fd23_5b21_4171_9475_81c4da566eda.slice. Apr 17 23:28:51.100781 containerd[1486]: time="2026-04-17T23:28:51.100158310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zxqb4,Uid:d8d9fd23-5b21-4171-9475-81c4da566eda,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:51.118705 containerd[1486]: time="2026-04-17T23:28:51.118103430Z" level=error msg="Failed to destroy network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.120506 containerd[1486]: time="2026-04-17T23:28:51.120418310Z" level=error msg="encountered an error cleaning up failed sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.121741 containerd[1486]: time="2026-04-17T23:28:51.121710150Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-hjmd8,Uid:6916156d-2f5c-4714-9bd0-3f37fd1a0dc3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.126011 kubelet[2529]: E0417 23:28:51.125950 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.126222 kubelet[2529]: E0417 23:28:51.126103 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hjmd8" Apr 17 23:28:51.126222 kubelet[2529]: E0417 23:28:51.126126 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hjmd8" Apr 17 23:28:51.126518 kubelet[2529]: E0417 23:28:51.126342 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-66bc5c9577-hjmd8_kube-system(6916156d-2f5c-4714-9bd0-3f37fd1a0dc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hjmd8_kube-system(6916156d-2f5c-4714-9bd0-3f37fd1a0dc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hjmd8" podUID="6916156d-2f5c-4714-9bd0-3f37fd1a0dc3" Apr 17 23:28:51.196315 containerd[1486]: time="2026-04-17T23:28:51.196236190Z" level=error msg="Failed to destroy network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.196907 containerd[1486]: time="2026-04-17T23:28:51.196844070Z" level=error msg="encountered an error cleaning up failed sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.197168 containerd[1486]: time="2026-04-17T23:28:51.197046270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkqp6,Uid:d866b1df-e265-46c7-a073-f92742fbb2a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 17 23:28:51.197889 kubelet[2529]: E0417 23:28:51.197605 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.197889 kubelet[2529]: E0417 23:28:51.197668 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tkqp6" Apr 17 23:28:51.197889 kubelet[2529]: E0417 23:28:51.197690 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tkqp6" Apr 17 23:28:51.198027 kubelet[2529]: E0417 23:28:51.197757 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tkqp6_kube-system(d866b1df-e265-46c7-a073-f92742fbb2a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tkqp6_kube-system(d866b1df-e265-46c7-a073-f92742fbb2a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tkqp6" podUID="d866b1df-e265-46c7-a073-f92742fbb2a8" Apr 17 23:28:51.204815 containerd[1486]: time="2026-04-17T23:28:51.204731430Z" level=error msg="Failed to destroy network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.206010 containerd[1486]: time="2026-04-17T23:28:51.205783030Z" level=error msg="encountered an error cleaning up failed sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.206153 containerd[1486]: time="2026-04-17T23:28:51.206123910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kds4v,Uid:8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.207018 kubelet[2529]: E0417 23:28:51.206842 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.207018 kubelet[2529]: E0417 23:28:51.206979 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:51.207505 kubelet[2529]: E0417 23:28:51.207261 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-kds4v" Apr 17 23:28:51.207505 kubelet[2529]: E0417 23:28:51.207429 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-kds4v_calico-system(8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-kds4v_calico-system(8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-kds4v" podUID="8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e" Apr 17 23:28:51.226197 containerd[1486]: time="2026-04-17T23:28:51.225883870Z" level=error msg="Failed to destroy 
network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.226322 containerd[1486]: time="2026-04-17T23:28:51.226230030Z" level=error msg="encountered an error cleaning up failed sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.226322 containerd[1486]: time="2026-04-17T23:28:51.226276350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfcc84fc9-h7qh6,Uid:48917d2d-23fd-499e-a8bf-6a46380b5f6d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.227052 kubelet[2529]: E0417 23:28:51.226521 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.227052 kubelet[2529]: E0417 23:28:51.226574 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" Apr 17 23:28:51.227052 kubelet[2529]: E0417 23:28:51.226608 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" Apr 17 23:28:51.227171 kubelet[2529]: E0417 23:28:51.226665 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cfcc84fc9-h7qh6_calico-system(48917d2d-23fd-499e-a8bf-6a46380b5f6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cfcc84fc9-h7qh6_calico-system(48917d2d-23fd-499e-a8bf-6a46380b5f6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" podUID="48917d2d-23fd-499e-a8bf-6a46380b5f6d" Apr 17 23:28:51.231358 containerd[1486]: time="2026-04-17T23:28:51.231313750Z" level=error msg="Failed to destroy network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 
23:28:51.233537 containerd[1486]: time="2026-04-17T23:28:51.232771990Z" level=error msg="encountered an error cleaning up failed sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.233537 containerd[1486]: time="2026-04-17T23:28:51.233106870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-tk57g,Uid:ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.233712 kubelet[2529]: E0417 23:28:51.233380 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.233712 kubelet[2529]: E0417 23:28:51.233422 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5865bd758-tk57g" Apr 17 23:28:51.233712 kubelet[2529]: E0417 23:28:51.233441 2529 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5865bd758-tk57g" Apr 17 23:28:51.233800 kubelet[2529]: E0417 23:28:51.233489 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5865bd758-tk57g_calico-system(ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5865bd758-tk57g_calico-system(ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5865bd758-tk57g" podUID="ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c" Apr 17 23:28:51.244332 containerd[1486]: time="2026-04-17T23:28:51.243635470Z" level=error msg="Failed to destroy network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.244332 containerd[1486]: time="2026-04-17T23:28:51.244099870Z" level=error msg="encountered an error cleaning up failed sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.244332 containerd[1486]: time="2026-04-17T23:28:51.244273190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68d6dbbfb6-zsktw,Uid:6bc2b360-2bde-490c-9433-beeae81acf45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.244993 kubelet[2529]: E0417 23:28:51.244955 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.245580 kubelet[2529]: E0417 23:28:51.245228 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:51.245580 kubelet[2529]: E0417 23:28:51.245513 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/whisker-68d6dbbfb6-zsktw" Apr 17 23:28:51.245949 kubelet[2529]: E0417 23:28:51.245842 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68d6dbbfb6-zsktw_calico-system(6bc2b360-2bde-490c-9433-beeae81acf45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68d6dbbfb6-zsktw_calico-system(6bc2b360-2bde-490c-9433-beeae81acf45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68d6dbbfb6-zsktw" podUID="6bc2b360-2bde-490c-9433-beeae81acf45" Apr 17 23:28:51.249833 kubelet[2529]: I0417 23:28:51.249699 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:28:51.251694 containerd[1486]: time="2026-04-17T23:28:51.251584310Z" level=info msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" Apr 17 23:28:51.251896 containerd[1486]: time="2026-04-17T23:28:51.251826270Z" level=info msg="Ensure that sandbox 96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc in task-service has been cleanup successfully" Apr 17 23:28:51.269695 containerd[1486]: time="2026-04-17T23:28:51.269638390Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:28:51.281749 kubelet[2529]: I0417 23:28:51.280686 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:28:51.281958 
containerd[1486]: time="2026-04-17T23:28:51.281677630Z" level=error msg="Failed to destroy network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.284248 containerd[1486]: time="2026-04-17T23:28:51.284208990Z" level=info msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" Apr 17 23:28:51.284778 containerd[1486]: time="2026-04-17T23:28:51.284744630Z" level=info msg="Ensure that sandbox 0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a in task-service has been cleanup successfully" Apr 17 23:28:51.290387 containerd[1486]: time="2026-04-17T23:28:51.290322670Z" level=error msg="encountered an error cleaning up failed sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.290596 containerd[1486]: time="2026-04-17T23:28:51.290389430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-vdph8,Uid:9fdd0055-33f6-4661-b37f-fdeacc1cf39d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.292042 kubelet[2529]: E0417 23:28:51.290867 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.292042 kubelet[2529]: E0417 23:28:51.290944 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5865bd758-vdph8" Apr 17 23:28:51.292042 kubelet[2529]: E0417 23:28:51.290966 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5865bd758-vdph8" Apr 17 23:28:51.292203 kubelet[2529]: E0417 23:28:51.291015 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5865bd758-vdph8_calico-system(9fdd0055-33f6-4661-b37f-fdeacc1cf39d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5865bd758-vdph8_calico-system(9fdd0055-33f6-4661-b37f-fdeacc1cf39d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-5865bd758-vdph8" podUID="9fdd0055-33f6-4661-b37f-fdeacc1cf39d" Apr 17 23:28:51.296034 kubelet[2529]: I0417 23:28:51.295670 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:28:51.298205 containerd[1486]: time="2026-04-17T23:28:51.298167310Z" level=info msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" Apr 17 23:28:51.298556 containerd[1486]: time="2026-04-17T23:28:51.298535230Z" level=info msg="Ensure that sandbox b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce in task-service has been cleanup successfully" Apr 17 23:28:51.306319 kubelet[2529]: I0417 23:28:51.305771 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:28:51.307388 containerd[1486]: time="2026-04-17T23:28:51.307352870Z" level=info msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" Apr 17 23:28:51.307594 containerd[1486]: time="2026-04-17T23:28:51.307573150Z" level=info msg="Ensure that sandbox 9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0 in task-service has been cleanup successfully" Apr 17 23:28:51.333626 containerd[1486]: time="2026-04-17T23:28:51.333174830Z" level=info msg="CreateContainer within sandbox \"1b88c1847746ce1f594c14d3c8b139d67bb5c1ef79d0f29ad46188790936a4c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f\"" Apr 17 23:28:51.335003 containerd[1486]: time="2026-04-17T23:28:51.334966510Z" level=info msg="StartContainer for \"7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f\"" Apr 17 23:28:51.353964 containerd[1486]: time="2026-04-17T23:28:51.353676230Z" level=error msg="Failed to destroy network for 
sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.359148 containerd[1486]: time="2026-04-17T23:28:51.359087190Z" level=error msg="encountered an error cleaning up failed sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.359440 containerd[1486]: time="2026-04-17T23:28:51.359371070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zxqb4,Uid:d8d9fd23-5b21-4171-9475-81c4da566eda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.359901 kubelet[2529]: E0417 23:28:51.359787 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.360238 kubelet[2529]: E0417 23:28:51.359860 2529 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:51.360238 kubelet[2529]: E0417 23:28:51.360139 2529 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zxqb4" Apr 17 23:28:51.360789 kubelet[2529]: E0417 23:28:51.360223 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zxqb4_calico-system(d8d9fd23-5b21-4171-9475-81c4da566eda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zxqb4_calico-system(d8d9fd23-5b21-4171-9475-81c4da566eda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zxqb4" podUID="d8d9fd23-5b21-4171-9475-81c4da566eda" Apr 17 23:28:51.387108 containerd[1486]: time="2026-04-17T23:28:51.385978030Z" level=error msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" failed" error="failed to destroy network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.387667 kubelet[2529]: E0417 
23:28:51.387486 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:28:51.387667 kubelet[2529]: E0417 23:28:51.387556 2529 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a"} Apr 17 23:28:51.387667 kubelet[2529]: E0417 23:28:51.387607 2529 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48917d2d-23fd-499e-a8bf-6a46380b5f6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:28:51.387667 kubelet[2529]: E0417 23:28:51.387635 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48917d2d-23fd-499e-a8bf-6a46380b5f6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" podUID="48917d2d-23fd-499e-a8bf-6a46380b5f6d" Apr 17 23:28:51.390537 containerd[1486]: 
time="2026-04-17T23:28:51.390483630Z" level=error msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" failed" error="failed to destroy network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.390938 kubelet[2529]: E0417 23:28:51.390761 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:28:51.390938 kubelet[2529]: E0417 23:28:51.390822 2529 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc"} Apr 17 23:28:51.390938 kubelet[2529]: E0417 23:28:51.390857 2529 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:28:51.390938 kubelet[2529]: E0417 23:28:51.390908 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-kds4v" podUID="8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e" Apr 17 23:28:51.392585 containerd[1486]: time="2026-04-17T23:28:51.392518390Z" level=error msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" failed" error="failed to destroy network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.392975 kubelet[2529]: E0417 23:28:51.392769 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:28:51.392975 kubelet[2529]: E0417 23:28:51.392859 2529 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0"} Apr 17 23:28:51.392975 kubelet[2529]: E0417 23:28:51.392898 2529 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:28:51.392975 kubelet[2529]: E0417 23:28:51.392929 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hjmd8" podUID="6916156d-2f5c-4714-9bd0-3f37fd1a0dc3" Apr 17 23:28:51.397151 containerd[1486]: time="2026-04-17T23:28:51.397095190Z" level=error msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" failed" error="failed to destroy network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:51.398084 kubelet[2529]: E0417 23:28:51.398036 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:28:51.398175 kubelet[2529]: E0417 23:28:51.398097 2529 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce"} Apr 17 23:28:51.398175 kubelet[2529]: E0417 23:28:51.398134 2529 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d866b1df-e265-46c7-a073-f92742fbb2a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:28:51.398175 kubelet[2529]: E0417 23:28:51.398162 2529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d866b1df-e265-46c7-a073-f92742fbb2a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tkqp6" podUID="d866b1df-e265-46c7-a073-f92742fbb2a8" Apr 17 23:28:51.408130 systemd[1]: Started cri-containerd-7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f.scope - libcontainer container 7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f. Apr 17 23:28:51.445002 containerd[1486]: time="2026-04-17T23:28:51.444352710Z" level=info msg="StartContainer for \"7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f\" returns successfully" Apr 17 23:28:51.762140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0-shm.mount: Deactivated successfully. 
Apr 17 23:28:52.325936 kubelet[2529]: I0417 23:28:52.323208 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:28:52.325936 kubelet[2529]: I0417 23:28:52.324819 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:28:52.326444 containerd[1486]: time="2026-04-17T23:28:52.323361910Z" level=info msg="StopPodSandbox for \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\"" Apr 17 23:28:52.326444 containerd[1486]: time="2026-04-17T23:28:52.323672230Z" level=info msg="Ensure that sandbox bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b in task-service has been cleanup successfully" Apr 17 23:28:52.326444 containerd[1486]: time="2026-04-17T23:28:52.325351910Z" level=info msg="StopPodSandbox for \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\"" Apr 17 23:28:52.326444 containerd[1486]: time="2026-04-17T23:28:52.325507190Z" level=info msg="Ensure that sandbox 09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8 in task-service has been cleanup successfully" Apr 17 23:28:52.335055 kubelet[2529]: I0417 23:28:52.332409 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:28:52.335176 containerd[1486]: time="2026-04-17T23:28:52.332963390Z" level=info msg="StopPodSandbox for \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\"" Apr 17 23:28:52.335176 containerd[1486]: time="2026-04-17T23:28:52.333237950Z" level=info msg="Ensure that sandbox b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0 in task-service has been cleanup successfully" Apr 17 23:28:52.345591 kubelet[2529]: I0417 23:28:52.343752 2529 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:28:52.346108 containerd[1486]: time="2026-04-17T23:28:52.346065150Z" level=info msg="StopPodSandbox for \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\"" Apr 17 23:28:52.346260 containerd[1486]: time="2026-04-17T23:28:52.346242870Z" level=info msg="Ensure that sandbox 87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b in task-service has been cleanup successfully" Apr 17 23:28:52.365133 kubelet[2529]: I0417 23:28:52.365065 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n4vxv" podStartSLOduration=3.22737211 podStartE2EDuration="15.36503671s" podCreationTimestamp="2026-04-17 23:28:37 +0000 UTC" firstStartedPulling="2026-04-17 23:28:37.59729059 +0000 UTC m=+22.669968881" lastFinishedPulling="2026-04-17 23:28:49.73495523 +0000 UTC m=+34.807633481" observedRunningTime="2026-04-17 23:28:52.36015035 +0000 UTC m=+37.432828601" watchObservedRunningTime="2026-04-17 23:28:52.36503671 +0000 UTC m=+37.437715001" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.474 [INFO][3772] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" iface="eth0" netns="/var/run/netns/cni-dd476f5b-fe88-a888-c38e-1c9b4c7592fd" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" iface="eth0" netns="/var/run/netns/cni-dd476f5b-fe88-a888-c38e-1c9b4c7592fd" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" iface="eth0" netns="/var/run/netns/cni-dd476f5b-fe88-a888-c38e-1c9b4c7592fd" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3772] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3772] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.556 [INFO][3796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.556 [INFO][3796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.557 [INFO][3796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.574 [WARNING][3796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.574 [INFO][3796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.575 [INFO][3796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.585595 containerd[1486]: 2026-04-17 23:28:52.584 [INFO][3772] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:28:52.588454 containerd[1486]: time="2026-04-17T23:28:52.587938630Z" level=info msg="TearDown network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" successfully" Apr 17 23:28:52.588454 containerd[1486]: time="2026-04-17T23:28:52.587983910Z" level=info msg="StopPodSandbox for \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" returns successfully" Apr 17 23:28:52.591910 systemd[1]: run-netns-cni\x2ddd476f5b\x2dfe88\x2da888\x2dc38e\x2d1c9b4c7592fd.mount: Deactivated successfully. Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.494 [INFO][3750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.495 [INFO][3750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" iface="eth0" netns="/var/run/netns/cni-08e88170-9e8a-3304-eb5d-0edd904b6df3" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.495 [INFO][3750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" iface="eth0" netns="/var/run/netns/cni-08e88170-9e8a-3304-eb5d-0edd904b6df3" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.496 [INFO][3750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" iface="eth0" netns="/var/run/netns/cni-08e88170-9e8a-3304-eb5d-0edd904b6df3" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.496 [INFO][3750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.496 [INFO][3750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.555 [INFO][3811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.557 [INFO][3811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.575 [INFO][3811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.593 [WARNING][3811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.593 [INFO][3811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.596 [INFO][3811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.611164 containerd[1486]: 2026-04-17 23:28:52.604 [INFO][3750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:28:52.610654 systemd[1]: run-netns-cni\x2d08e88170\x2d9e8a\x2d3304\x2deb5d\x2d0edd904b6df3.mount: Deactivated successfully. 
Apr 17 23:28:52.612101 containerd[1486]: time="2026-04-17T23:28:52.612058590Z" level=info msg="TearDown network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" successfully" Apr 17 23:28:52.612101 containerd[1486]: time="2026-04-17T23:28:52.612093830Z" level=info msg="StopPodSandbox for \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" returns successfully" Apr 17 23:28:52.616983 containerd[1486]: time="2026-04-17T23:28:52.616940190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-tk57g,Uid:ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3774] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" iface="eth0" netns="/var/run/netns/cni-9c2cb16e-2bb4-873e-2a3d-df38446683e9" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.478 [INFO][3774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" iface="eth0" netns="/var/run/netns/cni-9c2cb16e-2bb4-873e-2a3d-df38446683e9" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.478 [INFO][3774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" iface="eth0" netns="/var/run/netns/cni-9c2cb16e-2bb4-873e-2a3d-df38446683e9" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.478 [INFO][3774] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.478 [INFO][3774] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.558 [INFO][3800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.558 [INFO][3800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.597 [INFO][3800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.616 [WARNING][3800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.616 [INFO][3800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.619 [INFO][3800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.625694 containerd[1486]: 2026-04-17 23:28:52.622 [INFO][3774] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:28:52.629469 containerd[1486]: time="2026-04-17T23:28:52.628213590Z" level=info msg="TearDown network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" successfully" Apr 17 23:28:52.629469 containerd[1486]: time="2026-04-17T23:28:52.628253550Z" level=info msg="StopPodSandbox for \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" returns successfully" Apr 17 23:28:52.630993 systemd[1]: run-netns-cni\x2d9c2cb16e\x2d2bb4\x2d873e\x2d2a3d\x2ddf38446683e9.mount: Deactivated successfully. 
Apr 17 23:28:52.633429 containerd[1486]: time="2026-04-17T23:28:52.633388750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-vdph8,Uid:9fdd0055-33f6-4661-b37f-fdeacc1cf39d,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:52.646013 kubelet[2529]: I0417 23:28:52.645980 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcgkv\" (UniqueName: \"kubernetes.io/projected/6bc2b360-2bde-490c-9433-beeae81acf45-kube-api-access-fcgkv\") pod \"6bc2b360-2bde-490c-9433-beeae81acf45\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " Apr 17 23:28:52.646531 kubelet[2529]: I0417 23:28:52.646169 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-nginx-config\") pod \"6bc2b360-2bde-490c-9433-beeae81acf45\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " Apr 17 23:28:52.646531 kubelet[2529]: I0417 23:28:52.646228 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-ca-bundle\") pod \"6bc2b360-2bde-490c-9433-beeae81acf45\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " Apr 17 23:28:52.646531 kubelet[2529]: I0417 23:28:52.646255 2529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-backend-key-pair\") pod \"6bc2b360-2bde-490c-9433-beeae81acf45\" (UID: \"6bc2b360-2bde-490c-9433-beeae81acf45\") " Apr 17 23:28:52.649894 kubelet[2529]: I0417 23:28:52.648639 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "6bc2b360-2bde-490c-9433-beeae81acf45" (UID: 
"6bc2b360-2bde-490c-9433-beeae81acf45"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:28:52.649894 kubelet[2529]: I0417 23:28:52.649185 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6bc2b360-2bde-490c-9433-beeae81acf45" (UID: "6bc2b360-2bde-490c-9433-beeae81acf45"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:28:52.653209 kubelet[2529]: I0417 23:28:52.653163 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6bc2b360-2bde-490c-9433-beeae81acf45" (UID: "6bc2b360-2bde-490c-9433-beeae81acf45"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.477 [INFO][3775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" iface="eth0" netns="/var/run/netns/cni-73398c68-fbd2-42b7-86fc-7eeb4eaea7cb" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.478 [INFO][3775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" iface="eth0" netns="/var/run/netns/cni-73398c68-fbd2-42b7-86fc-7eeb4eaea7cb" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.479 [INFO][3775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" iface="eth0" netns="/var/run/netns/cni-73398c68-fbd2-42b7-86fc-7eeb4eaea7cb" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.479 [INFO][3775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.479 [INFO][3775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.558 [INFO][3799] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.560 [INFO][3799] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.619 [INFO][3799] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.635 [WARNING][3799] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.635 [INFO][3799] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.640 [INFO][3799] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.653308 containerd[1486]: 2026-04-17 23:28:52.644 [INFO][3775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:28:52.654105 containerd[1486]: time="2026-04-17T23:28:52.654060870Z" level=info msg="TearDown network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" successfully" Apr 17 23:28:52.654179 containerd[1486]: time="2026-04-17T23:28:52.654107990Z" level=info msg="StopPodSandbox for \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" returns successfully" Apr 17 23:28:52.655556 kubelet[2529]: I0417 23:28:52.655519 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc2b360-2bde-490c-9433-beeae81acf45-kube-api-access-fcgkv" (OuterVolumeSpecName: "kube-api-access-fcgkv") pod "6bc2b360-2bde-490c-9433-beeae81acf45" (UID: "6bc2b360-2bde-490c-9433-beeae81acf45"). InnerVolumeSpecName "kube-api-access-fcgkv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:28:52.657909 containerd[1486]: time="2026-04-17T23:28:52.657868110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zxqb4,Uid:d8d9fd23-5b21-4171-9475-81c4da566eda,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:52.746564 kubelet[2529]: I0417 23:28:52.746508 2529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcgkv\" (UniqueName: \"kubernetes.io/projected/6bc2b360-2bde-490c-9433-beeae81acf45-kube-api-access-fcgkv\") on node \"ci-4081-3-6-n-6417c65d59\" DevicePath \"\"" Apr 17 23:28:52.746564 kubelet[2529]: I0417 23:28:52.746550 2529 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-nginx-config\") on node \"ci-4081-3-6-n-6417c65d59\" DevicePath \"\"" Apr 17 23:28:52.746564 kubelet[2529]: I0417 23:28:52.746561 2529 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-ca-bundle\") on node \"ci-4081-3-6-n-6417c65d59\" DevicePath \"\"" Apr 17 23:28:52.746564 kubelet[2529]: I0417 23:28:52.746570 2529 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bc2b360-2bde-490c-9433-beeae81acf45-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-6417c65d59\" DevicePath \"\"" Apr 17 23:28:52.761085 systemd[1]: run-netns-cni\x2d73398c68\x2dfbd2\x2d42b7\x2d86fc\x2d7eeb4eaea7cb.mount: Deactivated successfully. Apr 17 23:28:52.761181 systemd[1]: var-lib-kubelet-pods-6bc2b360\x2d2bde\x2d490c\x2d9433\x2dbeeae81acf45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcgkv.mount: Deactivated successfully. 
Apr 17 23:28:52.761240 systemd[1]: var-lib-kubelet-pods-6bc2b360\x2d2bde\x2d490c\x2d9433\x2dbeeae81acf45-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:28:52.868911 systemd-networkd[1386]: cali305ec1431ce: Link UP Apr 17 23:28:52.869135 systemd-networkd[1386]: cali305ec1431ce: Gained carrier Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.701 [ERROR][3837] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.726 [INFO][3837] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0 calico-apiserver-5865bd758- calico-system 9fdd0055-33f6-4661-b37f-fdeacc1cf39d 868 0 2026-04-17 23:28:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5865bd758 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 calico-apiserver-5865bd758-vdph8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali305ec1431ce [] [] }} ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.726 [INFO][3837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" 
WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.779 [INFO][3868] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" HandleID="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.800 [INFO][3868] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" HandleID="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"calico-apiserver-5865bd758-vdph8", "timestamp":"2026-04-17 23:28:52.77932203 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000247080)} Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.800 [INFO][3868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.800 [INFO][3868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.800 [INFO][3868] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.807 [INFO][3868] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.813 [INFO][3868] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.820 [INFO][3868] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.822 [INFO][3868] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.824 [INFO][3868] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.824 [INFO][3868] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.826 [INFO][3868] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.834 [INFO][3868] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.844 [INFO][3868] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.193/26] block=192.168.74.192/26 handle="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.844 [INFO][3868] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.193/26] handle="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.844 [INFO][3868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.893292 containerd[1486]: 2026-04-17 23:28:52.844 [INFO][3868] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.193/26] IPv6=[] ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" HandleID="k8s-pod-network.360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.847 [INFO][3837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"9fdd0055-33f6-4661-b37f-fdeacc1cf39d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"calico-apiserver-5865bd758-vdph8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali305ec1431ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.848 [INFO][3837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.193/32] ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.848 [INFO][3837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali305ec1431ce ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.868 [INFO][3837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" 
WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.874 [INFO][3837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"9fdd0055-33f6-4661-b37f-fdeacc1cf39d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab", Pod:"calico-apiserver-5865bd758-vdph8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali305ec1431ce", MAC:"de:1a:44:9a:9a:53", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:52.893853 containerd[1486]: 2026-04-17 23:28:52.889 [INFO][3837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab" Namespace="calico-system" Pod="calico-apiserver-5865bd758-vdph8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:28:52.921902 containerd[1486]: time="2026-04-17T23:28:52.921702510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:52.922712 containerd[1486]: time="2026-04-17T23:28:52.922450670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:52.923923 containerd[1486]: time="2026-04-17T23:28:52.922503910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:52.923923 containerd[1486]: time="2026-04-17T23:28:52.922606910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:52.971943 systemd-networkd[1386]: cali4930f841eea: Link UP Apr 17 23:28:52.972173 systemd-networkd[1386]: cali4930f841eea: Gained carrier Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.692 [ERROR][3828] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.715 [INFO][3828] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0 calico-apiserver-5865bd758- calico-system ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c 870 0 2026-04-17 23:28:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5865bd758 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 calico-apiserver-5865bd758-tk57g eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4930f841eea [] [] }} ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.715 [INFO][3828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.788 [INFO][3862] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" HandleID="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.804 [INFO][3862] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" HandleID="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"calico-apiserver-5865bd758-tk57g", "timestamp":"2026-04-17 23:28:52.78888747 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f0420)} Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.804 [INFO][3862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.844 [INFO][3862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.845 [INFO][3862] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.905 [INFO][3862] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.915 [INFO][3862] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.924 [INFO][3862] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.933 [INFO][3862] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.939 [INFO][3862] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.939 [INFO][3862] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.943 [INFO][3862] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.953 [INFO][3862] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.961 [INFO][3862] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.194/26] block=192.168.74.192/26 handle="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.962 [INFO][3862] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.194/26] handle="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.962 [INFO][3862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:52.989494 containerd[1486]: 2026-04-17 23:28:52.962 [INFO][3862] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.194/26] IPv6=[] ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" HandleID="k8s-pod-network.1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.966 [INFO][3828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"calico-apiserver-5865bd758-tk57g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4930f841eea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.967 [INFO][3828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.194/32] ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.967 [INFO][3828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4930f841eea ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.969 [INFO][3828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" 
WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.969 [INFO][3828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c", Pod:"calico-apiserver-5865bd758-tk57g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4930f841eea", MAC:"36:8f:90:8a:6c:56", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:52.991155 containerd[1486]: 2026-04-17 23:28:52.987 [INFO][3828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c" Namespace="calico-system" Pod="calico-apiserver-5865bd758-tk57g" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:28:52.993360 systemd[1]: Started cri-containerd-360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab.scope - libcontainer container 360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab. Apr 17 23:28:53.050751 containerd[1486]: time="2026-04-17T23:28:53.050296670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:53.050751 containerd[1486]: time="2026-04-17T23:28:53.050383150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:53.050751 containerd[1486]: time="2026-04-17T23:28:53.050400070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:53.050751 containerd[1486]: time="2026-04-17T23:28:53.050477790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:53.105938 systemd[1]: Removed slice kubepods-besteffort-pod6bc2b360_2bde_490c_9433_beeae81acf45.slice - libcontainer container kubepods-besteffort-pod6bc2b360_2bde_490c_9433_beeae81acf45.slice. 
Apr 17 23:28:53.109285 systemd-networkd[1386]: calie8a02487be3: Link UP Apr 17 23:28:53.114498 systemd-networkd[1386]: calie8a02487be3: Gained carrier Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.719 [ERROR][3848] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.745 [INFO][3848] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0 csi-node-driver- calico-system d8d9fd23-5b21-4171-9475-81c4da566eda 869 0 2026-04-17 23:28:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 csi-node-driver-zxqb4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8a02487be3 [] [] }} ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.745 [INFO][3848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.803 [INFO][3873] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" HandleID="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.816 [INFO][3873] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" HandleID="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"csi-node-driver-zxqb4", "timestamp":"2026-04-17 23:28:52.80324435 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.817 [INFO][3873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.962 [INFO][3873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:52.962 [INFO][3873] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.007 [INFO][3873] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.018 [INFO][3873] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.032 [INFO][3873] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.037 [INFO][3873] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.041 [INFO][3873] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.041 [INFO][3873] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.050 [INFO][3873] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6 Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.067 [INFO][3873] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.076 [INFO][3873] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.195/26] block=192.168.74.192/26 handle="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.076 [INFO][3873] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.195/26] handle="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.076 [INFO][3873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:53.160239 containerd[1486]: 2026-04-17 23:28:53.076 [INFO][3873] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.195/26] IPv6=[] ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" HandleID="k8s-pod-network.ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160841 containerd[1486]: 2026-04-17 23:28:53.101 [INFO][3848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8d9fd23-5b21-4171-9475-81c4da566eda", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"csi-node-driver-zxqb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8a02487be3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:53.160841 containerd[1486]: 2026-04-17 23:28:53.101 [INFO][3848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.195/32] ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160841 containerd[1486]: 2026-04-17 23:28:53.101 [INFO][3848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8a02487be3 ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160841 containerd[1486]: 2026-04-17 23:28:53.119 [INFO][3848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.160841 
containerd[1486]: 2026-04-17 23:28:53.119 [INFO][3848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8d9fd23-5b21-4171-9475-81c4da566eda", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6", Pod:"csi-node-driver-zxqb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8a02487be3", MAC:"d6:f6:41:d1:96:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:53.160841 containerd[1486]: 
2026-04-17 23:28:53.151 [INFO][3848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6" Namespace="calico-system" Pod="csi-node-driver-zxqb4" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:28:53.169193 systemd[1]: Started cri-containerd-1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c.scope - libcontainer container 1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c. Apr 17 23:28:53.174104 containerd[1486]: time="2026-04-17T23:28:53.174064270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-vdph8,Uid:9fdd0055-33f6-4661-b37f-fdeacc1cf39d,Namespace:calico-system,Attempt:1,} returns sandbox id \"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab\"" Apr 17 23:28:53.190718 containerd[1486]: time="2026-04-17T23:28:53.190384030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:53.191961 containerd[1486]: time="2026-04-17T23:28:53.190825510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:53.191961 containerd[1486]: time="2026-04-17T23:28:53.190845110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:53.191961 containerd[1486]: time="2026-04-17T23:28:53.190936870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:53.199450 containerd[1486]: time="2026-04-17T23:28:53.199170830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:28:53.229072 systemd[1]: Started cri-containerd-ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6.scope - libcontainer container ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6. Apr 17 23:28:53.266851 containerd[1486]: time="2026-04-17T23:28:53.266473430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5865bd758-tk57g,Uid:ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c\"" Apr 17 23:28:53.295197 containerd[1486]: time="2026-04-17T23:28:53.294332110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zxqb4,Uid:d8d9fd23-5b21-4171-9475-81c4da566eda,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6\"" Apr 17 23:28:53.355241 kubelet[2529]: I0417 23:28:53.354290 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:28:53.443333 systemd[1]: Created slice kubepods-besteffort-pode8a75b94_3115_4edf_a396_2e2f3f583471.slice - libcontainer container kubepods-besteffort-pode8a75b94_3115_4edf_a396_2e2f3f583471.slice. 
Apr 17 23:28:53.453759 kubelet[2529]: I0417 23:28:53.453036 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8a75b94-3115-4edf-a396-2e2f3f583471-nginx-config\") pod \"whisker-d5c964959-hqvsp\" (UID: \"e8a75b94-3115-4edf-a396-2e2f3f583471\") " pod="calico-system/whisker-d5c964959-hqvsp" Apr 17 23:28:53.453759 kubelet[2529]: I0417 23:28:53.453118 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8a75b94-3115-4edf-a396-2e2f3f583471-whisker-backend-key-pair\") pod \"whisker-d5c964959-hqvsp\" (UID: \"e8a75b94-3115-4edf-a396-2e2f3f583471\") " pod="calico-system/whisker-d5c964959-hqvsp" Apr 17 23:28:53.453759 kubelet[2529]: I0417 23:28:53.453139 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8a75b94-3115-4edf-a396-2e2f3f583471-whisker-ca-bundle\") pod \"whisker-d5c964959-hqvsp\" (UID: \"e8a75b94-3115-4edf-a396-2e2f3f583471\") " pod="calico-system/whisker-d5c964959-hqvsp" Apr 17 23:28:53.453759 kubelet[2529]: I0417 23:28:53.453243 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8t4k\" (UniqueName: \"kubernetes.io/projected/e8a75b94-3115-4edf-a396-2e2f3f583471-kube-api-access-l8t4k\") pod \"whisker-d5c964959-hqvsp\" (UID: \"e8a75b94-3115-4edf-a396-2e2f3f583471\") " pod="calico-system/whisker-d5c964959-hqvsp" Apr 17 23:28:53.661936 kernel: calico-node[3937]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:28:53.763664 containerd[1486]: time="2026-04-17T23:28:53.763046190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d5c964959-hqvsp,Uid:e8a75b94-3115-4edf-a396-2e2f3f583471,Namespace:calico-system,Attempt:0,}" Apr 17 
23:28:53.967000 systemd-networkd[1386]: calid71faaf9d77: Link UP Apr 17 23:28:53.970989 systemd-networkd[1386]: calid71faaf9d77: Gained carrier Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.855 [INFO][4163] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0 whisker-d5c964959- calico-system e8a75b94-3115-4edf-a396-2e2f3f583471 897 0 2026-04-17 23:28:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d5c964959 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 whisker-d5c964959-hqvsp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid71faaf9d77 [] [] }} ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.855 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.889 [INFO][4175] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" HandleID="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.902 [INFO][4175] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" HandleID="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"whisker-d5c964959-hqvsp", "timestamp":"2026-04-17 23:28:53.88944327 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003f5340)} Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.902 [INFO][4175] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.902 [INFO][4175] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.902 [INFO][4175] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.908 [INFO][4175] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.921 [INFO][4175] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.935 [INFO][4175] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.940 [INFO][4175] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.943 [INFO][4175] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.943 [INFO][4175] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.946 [INFO][4175] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4 Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.952 [INFO][4175] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.962 [INFO][4175] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.196/26] block=192.168.74.192/26 handle="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.962 [INFO][4175] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.196/26] handle="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.962 [INFO][4175] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:53.989975 containerd[1486]: 2026-04-17 23:28:53.962 [INFO][4175] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.196/26] IPv6=[] ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" HandleID="k8s-pod-network.010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.965 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0", GenerateName:"whisker-d5c964959-", Namespace:"calico-system", SelfLink:"", UID:"e8a75b94-3115-4edf-a396-2e2f3f583471", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d5c964959", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"whisker-d5c964959-hqvsp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid71faaf9d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.965 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.196/32] ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.965 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid71faaf9d77 ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.968 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.970 [INFO][4163] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0", GenerateName:"whisker-d5c964959-", Namespace:"calico-system", SelfLink:"", UID:"e8a75b94-3115-4edf-a396-2e2f3f583471", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d5c964959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4", Pod:"whisker-d5c964959-hqvsp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid71faaf9d77", MAC:"4e:1f:71:9a:ee:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:53.990580 containerd[1486]: 2026-04-17 23:28:53.985 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4" 
Namespace="calico-system" Pod="whisker-d5c964959-hqvsp" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--d5c964959--hqvsp-eth0" Apr 17 23:28:54.055826 containerd[1486]: time="2026-04-17T23:28:54.055420710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:54.055826 containerd[1486]: time="2026-04-17T23:28:54.055479870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:54.055826 containerd[1486]: time="2026-04-17T23:28:54.055515230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:54.055826 containerd[1486]: time="2026-04-17T23:28:54.055624150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:54.093156 systemd[1]: Started cri-containerd-010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4.scope - libcontainer container 010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4. 
Apr 17 23:28:54.139003 containerd[1486]: time="2026-04-17T23:28:54.138961870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d5c964959-hqvsp,Uid:e8a75b94-3115-4edf-a396-2e2f3f583471,Namespace:calico-system,Attempt:0,} returns sandbox id \"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4\"" Apr 17 23:28:54.246203 systemd-networkd[1386]: vxlan.calico: Link UP Apr 17 23:28:54.246212 systemd-networkd[1386]: vxlan.calico: Gained carrier Apr 17 23:28:54.455017 systemd-networkd[1386]: cali305ec1431ce: Gained IPv6LL Apr 17 23:28:54.839177 systemd-networkd[1386]: cali4930f841eea: Gained IPv6LL Apr 17 23:28:54.841156 systemd-networkd[1386]: calie8a02487be3: Gained IPv6LL Apr 17 23:28:55.031896 systemd-networkd[1386]: calid71faaf9d77: Gained IPv6LL Apr 17 23:28:55.070158 kubelet[2529]: I0417 23:28:55.069918 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc2b360-2bde-490c-9433-beeae81acf45" path="/var/lib/kubelet/pods/6bc2b360-2bde-490c-9433-beeae81acf45/volumes" Apr 17 23:28:55.889957 containerd[1486]: time="2026-04-17T23:28:55.889891750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:55.892543 containerd[1486]: time="2026-04-17T23:28:55.892316670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 23:28:55.892543 containerd[1486]: time="2026-04-17T23:28:55.892451510Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:55.895890 containerd[1486]: time="2026-04-17T23:28:55.895739230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 
23:28:55.897083 containerd[1486]: time="2026-04-17T23:28:55.896951830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.69773548s" Apr 17 23:28:55.897083 containerd[1486]: time="2026-04-17T23:28:55.896995350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:28:55.898407 containerd[1486]: time="2026-04-17T23:28:55.898247350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:28:55.905064 containerd[1486]: time="2026-04-17T23:28:55.905010550Z" level=info msg="CreateContainer within sandbox \"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:28:55.920860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794261986.mount: Deactivated successfully. 
Apr 17 23:28:55.926381 containerd[1486]: time="2026-04-17T23:28:55.926323670Z" level=info msg="CreateContainer within sandbox \"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e7658a1ad8f67c75b05815e206efdf3e9e9ddcc49029c19b9e320ee37760a8d5\"" Apr 17 23:28:55.927429 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Apr 17 23:28:55.933041 containerd[1486]: time="2026-04-17T23:28:55.931847470Z" level=info msg="StartContainer for \"e7658a1ad8f67c75b05815e206efdf3e9e9ddcc49029c19b9e320ee37760a8d5\"" Apr 17 23:28:55.982249 systemd[1]: Started cri-containerd-e7658a1ad8f67c75b05815e206efdf3e9e9ddcc49029c19b9e320ee37760a8d5.scope - libcontainer container e7658a1ad8f67c75b05815e206efdf3e9e9ddcc49029c19b9e320ee37760a8d5. Apr 17 23:28:56.035930 containerd[1486]: time="2026-04-17T23:28:56.035848860Z" level=info msg="StartContainer for \"e7658a1ad8f67c75b05815e206efdf3e9e9ddcc49029c19b9e320ee37760a8d5\" returns successfully" Apr 17 23:28:56.303208 containerd[1486]: time="2026-04-17T23:28:56.303089481Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:56.305247 containerd[1486]: time="2026-04-17T23:28:56.305195115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:28:56.308108 containerd[1486]: time="2026-04-17T23:28:56.308048481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 409.762131ms" Apr 17 23:28:56.308108 containerd[1486]: time="2026-04-17T23:28:56.308107082Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:28:56.310336 containerd[1486]: time="2026-04-17T23:28:56.310300318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:28:56.316933 containerd[1486]: time="2026-04-17T23:28:56.316869345Z" level=info msg="CreateContainer within sandbox \"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:28:56.336917 containerd[1486]: time="2026-04-17T23:28:56.336835869Z" level=info msg="CreateContainer within sandbox \"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b0442647cb55028850fde744c4768d32ce41350e33f2421477341ec5ece6d9f\"" Apr 17 23:28:56.337794 containerd[1486]: time="2026-04-17T23:28:56.337756244Z" level=info msg="StartContainer for \"3b0442647cb55028850fde744c4768d32ce41350e33f2421477341ec5ece6d9f\"" Apr 17 23:28:56.387187 systemd[1]: Started cri-containerd-3b0442647cb55028850fde744c4768d32ce41350e33f2421477341ec5ece6d9f.scope - libcontainer container 3b0442647cb55028850fde744c4768d32ce41350e33f2421477341ec5ece6d9f. 
Apr 17 23:28:56.398738 kubelet[2529]: I0417 23:28:56.398168 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5865bd758-vdph8" podStartSLOduration=19.698662105 podStartE2EDuration="22.398143825s" podCreationTimestamp="2026-04-17 23:28:34 +0000 UTC" firstStartedPulling="2026-04-17 23:28:53.19848359 +0000 UTC m=+38.271161881" lastFinishedPulling="2026-04-17 23:28:55.89796531 +0000 UTC m=+40.970643601" observedRunningTime="2026-04-17 23:28:56.395901388 +0000 UTC m=+41.468579679" watchObservedRunningTime="2026-04-17 23:28:56.398143825 +0000 UTC m=+41.470822116" Apr 17 23:28:56.458239 containerd[1486]: time="2026-04-17T23:28:56.458188520Z" level=info msg="StartContainer for \"3b0442647cb55028850fde744c4768d32ce41350e33f2421477341ec5ece6d9f\" returns successfully" Apr 17 23:28:56.919061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785415228.mount: Deactivated successfully. Apr 17 23:28:57.381722 kubelet[2529]: I0417 23:28:57.381106 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:28:57.888335 containerd[1486]: time="2026-04-17T23:28:57.888255126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:57.891033 containerd[1486]: time="2026-04-17T23:28:57.890958367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 23:28:57.893166 containerd[1486]: time="2026-04-17T23:28:57.892977078Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:57.903037 containerd[1486]: time="2026-04-17T23:28:57.902910229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:57.906357 containerd[1486]: time="2026-04-17T23:28:57.903906205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.592822274s" Apr 17 23:28:57.906556 containerd[1486]: time="2026-04-17T23:28:57.906538445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 23:28:57.910249 containerd[1486]: time="2026-04-17T23:28:57.910209821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:28:57.914361 containerd[1486]: time="2026-04-17T23:28:57.914317483Z" level=info msg="CreateContainer within sandbox \"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:28:57.953659 containerd[1486]: time="2026-04-17T23:28:57.953485000Z" level=info msg="CreateContainer within sandbox \"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"713a0b0a96500b75806dcc5baa336cb8f217bd88b60da330c8375d12d0185005\"" Apr 17 23:28:57.955210 containerd[1486]: time="2026-04-17T23:28:57.955163505Z" level=info msg="StartContainer for \"713a0b0a96500b75806dcc5baa336cb8f217bd88b60da330c8375d12d0185005\"" Apr 17 23:28:58.002366 systemd[1]: Started cri-containerd-713a0b0a96500b75806dcc5baa336cb8f217bd88b60da330c8375d12d0185005.scope - libcontainer container 713a0b0a96500b75806dcc5baa336cb8f217bd88b60da330c8375d12d0185005. 
Apr 17 23:28:58.050085 containerd[1486]: time="2026-04-17T23:28:58.049511615Z" level=info msg="StartContainer for \"713a0b0a96500b75806dcc5baa336cb8f217bd88b60da330c8375d12d0185005\" returns successfully" Apr 17 23:28:58.386044 kubelet[2529]: I0417 23:28:58.386015 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:28:59.650005 containerd[1486]: time="2026-04-17T23:28:59.649038431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:59.650005 containerd[1486]: time="2026-04-17T23:28:59.649911923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 23:28:59.651498 containerd[1486]: time="2026-04-17T23:28:59.651429063Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:59.654383 containerd[1486]: time="2026-04-17T23:28:59.654340902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:59.655549 containerd[1486]: time="2026-04-17T23:28:59.655345755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.745090894s" Apr 17 23:28:59.655549 containerd[1486]: time="2026-04-17T23:28:59.655393436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 
17 23:28:59.657690 containerd[1486]: time="2026-04-17T23:28:59.657554585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:28:59.665593 containerd[1486]: time="2026-04-17T23:28:59.665448731Z" level=info msg="CreateContainer within sandbox \"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:28:59.687414 containerd[1486]: time="2026-04-17T23:28:59.687367784Z" level=info msg="CreateContainer within sandbox \"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cfefa039f6c9c4c7800cd378486811b49a65392f1c3fb66c4d5cd931aed5ee57\"" Apr 17 23:28:59.688508 containerd[1486]: time="2026-04-17T23:28:59.688477719Z" level=info msg="StartContainer for \"cfefa039f6c9c4c7800cd378486811b49a65392f1c3fb66c4d5cd931aed5ee57\"" Apr 17 23:28:59.739181 systemd[1]: Started cri-containerd-cfefa039f6c9c4c7800cd378486811b49a65392f1c3fb66c4d5cd931aed5ee57.scope - libcontainer container cfefa039f6c9c4c7800cd378486811b49a65392f1c3fb66c4d5cd931aed5ee57. 
Apr 17 23:28:59.781707 containerd[1486]: time="2026-04-17T23:28:59.781574245Z" level=info msg="StartContainer for \"cfefa039f6c9c4c7800cd378486811b49a65392f1c3fb66c4d5cd931aed5ee57\" returns successfully" Apr 17 23:29:00.470142 kubelet[2529]: I0417 23:29:00.469804 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:00.617004 kubelet[2529]: I0417 23:29:00.615643 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5865bd758-tk57g" podStartSLOduration=23.576898344 podStartE2EDuration="26.615626134s" podCreationTimestamp="2026-04-17 23:28:34 +0000 UTC" firstStartedPulling="2026-04-17 23:28:53.27049027 +0000 UTC m=+38.343168561" lastFinishedPulling="2026-04-17 23:28:56.30921806 +0000 UTC m=+41.381896351" observedRunningTime="2026-04-17 23:28:57.405017248 +0000 UTC m=+42.477695539" watchObservedRunningTime="2026-04-17 23:29:00.615626134 +0000 UTC m=+45.688304425" Apr 17 23:29:00.679783 systemd[1]: run-containerd-runc-k8s.io-7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f-runc.wSU1eN.mount: Deactivated successfully. 
Apr 17 23:29:01.308592 containerd[1486]: time="2026-04-17T23:29:01.308538426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:01.310612 containerd[1486]: time="2026-04-17T23:29:01.310563490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 17 23:29:01.311887 containerd[1486]: time="2026-04-17T23:29:01.311784985Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:01.316223 containerd[1486]: time="2026-04-17T23:29:01.316130556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:01.317102 containerd[1486]: time="2026-04-17T23:29:01.317048926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.659445461s" Apr 17 23:29:01.317102 containerd[1486]: time="2026-04-17T23:29:01.317088807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 17 23:29:01.319312 containerd[1486]: time="2026-04-17T23:29:01.319256552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:29:01.326016 containerd[1486]: time="2026-04-17T23:29:01.325959351Z" level=info 
msg="CreateContainer within sandbox \"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:29:01.342894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320667144.mount: Deactivated successfully. Apr 17 23:29:01.347136 containerd[1486]: time="2026-04-17T23:29:01.347074640Z" level=info msg="CreateContainer within sandbox \"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08143cf7ebd9b7135a2f0a7c13c3a975595fd4ac15bd803f157136ae1ce47c48\"" Apr 17 23:29:01.348157 containerd[1486]: time="2026-04-17T23:29:01.348017691Z" level=info msg="StartContainer for \"08143cf7ebd9b7135a2f0a7c13c3a975595fd4ac15bd803f157136ae1ce47c48\"" Apr 17 23:29:01.398563 systemd[1]: Started cri-containerd-08143cf7ebd9b7135a2f0a7c13c3a975595fd4ac15bd803f157136ae1ce47c48.scope - libcontainer container 08143cf7ebd9b7135a2f0a7c13c3a975595fd4ac15bd803f157136ae1ce47c48. Apr 17 23:29:01.435264 containerd[1486]: time="2026-04-17T23:29:01.435183196Z" level=info msg="StartContainer for \"08143cf7ebd9b7135a2f0a7c13c3a975595fd4ac15bd803f157136ae1ce47c48\" returns successfully" Apr 17 23:29:02.157536 kubelet[2529]: I0417 23:29:02.157458 2529 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:29:02.157536 kubelet[2529]: I0417 23:29:02.157520 2529 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:29:03.098116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245205403.mount: Deactivated successfully. 
Apr 17 23:29:03.119086 containerd[1486]: time="2026-04-17T23:29:03.117560124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:03.121066 containerd[1486]: time="2026-04-17T23:29:03.121024439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 23:29:03.122865 containerd[1486]: time="2026-04-17T23:29:03.122484854Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:03.128464 containerd[1486]: time="2026-04-17T23:29:03.128421676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:03.131638 containerd[1486]: time="2026-04-17T23:29:03.131496988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.812194755s" Apr 17 23:29:03.132354 containerd[1486]: time="2026-04-17T23:29:03.131701470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 23:29:03.136674 containerd[1486]: time="2026-04-17T23:29:03.136638761Z" level=info msg="CreateContainer within sandbox \"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:29:03.172936 
containerd[1486]: time="2026-04-17T23:29:03.172372170Z" level=info msg="CreateContainer within sandbox \"010a74c7fbdf91022db635fad11791855cb07ae712bc97b98ec072fb5c358ca4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3f5a040f03ded14379d14e99ec1564400207a243abb57b340f88057168f1637c\"" Apr 17 23:29:03.175215 containerd[1486]: time="2026-04-17T23:29:03.175180079Z" level=info msg="StartContainer for \"3f5a040f03ded14379d14e99ec1564400207a243abb57b340f88057168f1637c\"" Apr 17 23:29:03.219681 systemd[1]: Started cri-containerd-3f5a040f03ded14379d14e99ec1564400207a243abb57b340f88057168f1637c.scope - libcontainer container 3f5a040f03ded14379d14e99ec1564400207a243abb57b340f88057168f1637c. Apr 17 23:29:03.287798 containerd[1486]: time="2026-04-17T23:29:03.287685162Z" level=info msg="StartContainer for \"3f5a040f03ded14379d14e99ec1564400207a243abb57b340f88057168f1637c\" returns successfully" Apr 17 23:29:03.437051 kubelet[2529]: I0417 23:29:03.436147 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zxqb4" podStartSLOduration=18.415093703 podStartE2EDuration="26.436028336s" podCreationTimestamp="2026-04-17 23:28:37 +0000 UTC" firstStartedPulling="2026-04-17 23:28:53.29753755 +0000 UTC m=+38.370215841" lastFinishedPulling="2026-04-17 23:29:01.318472183 +0000 UTC m=+46.391150474" observedRunningTime="2026-04-17 23:29:02.434443151 +0000 UTC m=+47.507121402" watchObservedRunningTime="2026-04-17 23:29:03.436028336 +0000 UTC m=+48.508706627" Apr 17 23:29:04.066578 containerd[1486]: time="2026-04-17T23:29:04.066176328Z" level=info msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" Apr 17 23:29:04.067395 containerd[1486]: time="2026-04-17T23:29:04.066840855Z" level=info msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" Apr 17 23:29:04.156599 kubelet[2529]: I0417 23:29:04.156515 2529 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d5c964959-hqvsp" podStartSLOduration=2.164474912 podStartE2EDuration="11.156496444s" podCreationTimestamp="2026-04-17 23:28:53 +0000 UTC" firstStartedPulling="2026-04-17 23:28:54.14083559 +0000 UTC m=+39.213513881" lastFinishedPulling="2026-04-17 23:29:03.132857122 +0000 UTC m=+48.205535413" observedRunningTime="2026-04-17 23:29:03.437258109 +0000 UTC m=+48.509936360" watchObservedRunningTime="2026-04-17 23:29:04.156496444 +0000 UTC m=+49.229174735" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.157 [INFO][4658] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.158 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" iface="eth0" netns="/var/run/netns/cni-4e8cec1a-56c8-2b2d-bd16-f9e1f9d3cbdc" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.159 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" iface="eth0" netns="/var/run/netns/cni-4e8cec1a-56c8-2b2d-bd16-f9e1f9d3cbdc" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.159 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" iface="eth0" netns="/var/run/netns/cni-4e8cec1a-56c8-2b2d-bd16-f9e1f9d3cbdc" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.159 [INFO][4658] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.159 [INFO][4658] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.208 [INFO][4673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.209 [INFO][4673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.209 [INFO][4673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.224 [WARNING][4673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.224 [INFO][4673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.226 [INFO][4673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:04.232026 containerd[1486]: 2026-04-17 23:29:04.229 [INFO][4658] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:04.238713 systemd[1]: run-netns-cni\x2d4e8cec1a\x2d56c8\x2d2b2d\x2dbd16\x2df9e1f9d3cbdc.mount: Deactivated successfully. 
Apr 17 23:29:04.239438 containerd[1486]: time="2026-04-17T23:29:04.239369647Z" level=info msg="TearDown network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" successfully" Apr 17 23:29:04.239592 containerd[1486]: time="2026-04-17T23:29:04.239523488Z" level=info msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" returns successfully" Apr 17 23:29:04.244292 containerd[1486]: time="2026-04-17T23:29:04.244250574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hjmd8,Uid:6916156d-2f5c-4714-9bd0-3f37fd1a0dc3,Namespace:kube-system,Attempt:1,}" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.161 [INFO][4659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.161 [INFO][4659] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" iface="eth0" netns="/var/run/netns/cni-976de186-832f-30ae-7917-3a37c50094de" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.162 [INFO][4659] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" iface="eth0" netns="/var/run/netns/cni-976de186-832f-30ae-7917-3a37c50094de" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.163 [INFO][4659] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" iface="eth0" netns="/var/run/netns/cni-976de186-832f-30ae-7917-3a37c50094de" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.163 [INFO][4659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.163 [INFO][4659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.216 [INFO][4675] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.216 [INFO][4675] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.226 [INFO][4675] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.251 [WARNING][4675] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.251 [INFO][4675] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.254 [INFO][4675] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:04.266961 containerd[1486]: 2026-04-17 23:29:04.260 [INFO][4659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:04.274097 containerd[1486]: time="2026-04-17T23:29:04.273939902Z" level=info msg="TearDown network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" successfully" Apr 17 23:29:04.274097 containerd[1486]: time="2026-04-17T23:29:04.273987302Z" level=info msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" returns successfully" Apr 17 23:29:04.278916 containerd[1486]: time="2026-04-17T23:29:04.278059182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkqp6,Uid:d866b1df-e265-46c7-a073-f92742fbb2a8,Namespace:kube-system,Attempt:1,}" Apr 17 23:29:04.289457 systemd[1]: run-netns-cni\x2d976de186\x2d832f\x2d30ae\x2d7917\x2d3a37c50094de.mount: Deactivated successfully. 
Apr 17 23:29:04.535396 systemd-networkd[1386]: cali3718be877bb: Link UP Apr 17 23:29:04.537960 systemd-networkd[1386]: cali3718be877bb: Gained carrier Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.351 [INFO][4686] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0 coredns-66bc5c9577- kube-system 6916156d-2f5c-4714-9bd0-3f37fd1a0dc3 967 0 2026-04-17 23:28:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 coredns-66bc5c9577-hjmd8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3718be877bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.352 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.429 [INFO][4707] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" HandleID="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.452 [INFO][4707] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" HandleID="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003af4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"coredns-66bc5c9577-hjmd8", "timestamp":"2026-04-17 23:29:04.429370528 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b4580)} Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.452 [INFO][4707] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.453 [INFO][4707] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.453 [INFO][4707] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.461 [INFO][4707] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.479 [INFO][4707] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.490 [INFO][4707] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.494 [INFO][4707] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.500 [INFO][4707] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.500 [INFO][4707] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.503 [INFO][4707] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.510 [INFO][4707] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.523 [INFO][4707] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.197/26] block=192.168.74.192/26 handle="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.523 [INFO][4707] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.197/26] handle="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.523 [INFO][4707] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:04.581101 containerd[1486]: 2026-04-17 23:29:04.523 [INFO][4707] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.197/26] IPv6=[] ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" HandleID="k8s-pod-network.2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581664 containerd[1486]: 2026-04-17 23:29:04.527 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"coredns-66bc5c9577-hjmd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3718be877bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:04.581664 containerd[1486]: 2026-04-17 23:29:04.527 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.197/32] ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581664 containerd[1486]: 2026-04-17 23:29:04.527 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3718be877bb 
ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581664 containerd[1486]: 2026-04-17 23:29:04.540 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.581798 containerd[1486]: 2026-04-17 23:29:04.543 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", 
ContainerID:"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd", Pod:"coredns-66bc5c9577-hjmd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3718be877bb", MAC:"f2:41:af:7a:7c:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:04.581798 containerd[1486]: 2026-04-17 23:29:04.577 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd" Namespace="kube-system" Pod="coredns-66bc5c9577-hjmd8" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:04.609633 containerd[1486]: time="2026-04-17T23:29:04.609327952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:04.609633 containerd[1486]: time="2026-04-17T23:29:04.609434793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:04.609633 containerd[1486]: time="2026-04-17T23:29:04.609451954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:04.609633 containerd[1486]: time="2026-04-17T23:29:04.609545035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:04.641124 systemd[1]: Started cri-containerd-2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd.scope - libcontainer container 2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd. Apr 17 23:29:04.688114 containerd[1486]: time="2026-04-17T23:29:04.687815953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hjmd8,Uid:6916156d-2f5c-4714-9bd0-3f37fd1a0dc3,Namespace:kube-system,Attempt:1,} returns sandbox id \"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd\"" Apr 17 23:29:04.696927 containerd[1486]: time="2026-04-17T23:29:04.696516637Z" level=info msg="CreateContainer within sandbox \"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:29:04.706692 systemd-networkd[1386]: cali04e7ecec06f: Link UP Apr 17 23:29:04.712168 systemd-networkd[1386]: cali04e7ecec06f: Gained carrier Apr 17 23:29:04.735253 containerd[1486]: time="2026-04-17T23:29:04.733199473Z" level=info msg="CreateContainer within sandbox \"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"266dbf4218c79d33dd1113cd792475684049ecb2c7f6740113997dbc7af356f4\"" Apr 17 23:29:04.735480 containerd[1486]: time="2026-04-17T23:29:04.735454375Z" level=info msg="StartContainer for \"266dbf4218c79d33dd1113cd792475684049ecb2c7f6740113997dbc7af356f4\"" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 
23:29:04.435 [INFO][4696] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0 coredns-66bc5c9577- kube-system d866b1df-e265-46c7-a073-f92742fbb2a8 968 0 2026-04-17 23:28:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 coredns-66bc5c9577-tkqp6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04e7ecec06f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.435 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.507 [INFO][4717] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" HandleID="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.525 [INFO][4717] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" HandleID="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" 
Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"coredns-66bc5c9577-tkqp6", "timestamp":"2026-04-17 23:29:04.507206763 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000390580)} Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.526 [INFO][4717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.526 [INFO][4717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.526 [INFO][4717] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.562 [INFO][4717] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.588 [INFO][4717] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.607 [INFO][4717] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.623 [INFO][4717] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.636 [INFO][4717] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 
host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.636 [INFO][4717] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.650 [INFO][4717] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51 Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.664 [INFO][4717] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.691 [INFO][4717] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.198/26] block=192.168.74.192/26 handle="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.691 [INFO][4717] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.198/26] handle="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.691 [INFO][4717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:29:04.766167 containerd[1486]: 2026-04-17 23:29:04.691 [INFO][4717] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.198/26] IPv6=[] ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" HandleID="k8s-pod-network.817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.766670 containerd[1486]: 2026-04-17 23:29:04.696 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d866b1df-e265-46c7-a073-f92742fbb2a8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"coredns-66bc5c9577-tkqp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali04e7ecec06f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:04.766670 containerd[1486]: 2026-04-17 23:29:04.698 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.198/32] ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.766670 containerd[1486]: 2026-04-17 23:29:04.699 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04e7ecec06f ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.766670 containerd[1486]: 2026-04-17 23:29:04.715 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 
23:29:04.766786 containerd[1486]: 2026-04-17 23:29:04.722 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d866b1df-e265-46c7-a073-f92742fbb2a8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51", Pod:"coredns-66bc5c9577-tkqp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e7ecec06f", MAC:"fa:04:22:8c:d8:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:04.766786 containerd[1486]: 2026-04-17 23:29:04.758 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51" Namespace="kube-system" Pod="coredns-66bc5c9577-tkqp6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:04.772117 systemd[1]: Started cri-containerd-266dbf4218c79d33dd1113cd792475684049ecb2c7f6740113997dbc7af356f4.scope - libcontainer container 266dbf4218c79d33dd1113cd792475684049ecb2c7f6740113997dbc7af356f4. Apr 17 23:29:04.819524 containerd[1486]: time="2026-04-17T23:29:04.819159746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:04.819524 containerd[1486]: time="2026-04-17T23:29:04.819233867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:04.819524 containerd[1486]: time="2026-04-17T23:29:04.819248467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:04.819524 containerd[1486]: time="2026-04-17T23:29:04.819340708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:04.839302 containerd[1486]: time="2026-04-17T23:29:04.839181540Z" level=info msg="StartContainer for \"266dbf4218c79d33dd1113cd792475684049ecb2c7f6740113997dbc7af356f4\" returns successfully" Apr 17 23:29:04.904196 systemd[1]: Started cri-containerd-817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51.scope - libcontainer container 817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51. Apr 17 23:29:04.953413 containerd[1486]: time="2026-04-17T23:29:04.953367927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkqp6,Uid:d866b1df-e265-46c7-a073-f92742fbb2a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51\"" Apr 17 23:29:04.961502 containerd[1486]: time="2026-04-17T23:29:04.961457365Z" level=info msg="CreateContainer within sandbox \"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:29:04.981380 containerd[1486]: time="2026-04-17T23:29:04.981316798Z" level=info msg="CreateContainer within sandbox \"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4eacd690110a7adc238c1bb7ba91eb5233e5bb369da30d3cf43985ccdcbb971\"" Apr 17 23:29:04.982261 containerd[1486]: time="2026-04-17T23:29:04.982194726Z" level=info msg="StartContainer for \"c4eacd690110a7adc238c1bb7ba91eb5233e5bb369da30d3cf43985ccdcbb971\"" Apr 17 23:29:05.015135 systemd[1]: Started cri-containerd-c4eacd690110a7adc238c1bb7ba91eb5233e5bb369da30d3cf43985ccdcbb971.scope - libcontainer container c4eacd690110a7adc238c1bb7ba91eb5233e5bb369da30d3cf43985ccdcbb971. 
Apr 17 23:29:05.058308 containerd[1486]: time="2026-04-17T23:29:05.058229108Z" level=info msg="StartContainer for \"c4eacd690110a7adc238c1bb7ba91eb5233e5bb369da30d3cf43985ccdcbb971\" returns successfully" Apr 17 23:29:05.067771 containerd[1486]: time="2026-04-17T23:29:05.067646274Z" level=info msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.164 [INFO][4905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.165 [INFO][4905] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" iface="eth0" netns="/var/run/netns/cni-cf2c135d-f1ab-07fc-c97c-aaf0bce6944f" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.166 [INFO][4905] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" iface="eth0" netns="/var/run/netns/cni-cf2c135d-f1ab-07fc-c97c-aaf0bce6944f" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.166 [INFO][4905] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" iface="eth0" netns="/var/run/netns/cni-cf2c135d-f1ab-07fc-c97c-aaf0bce6944f" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.166 [INFO][4905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.166 [INFO][4905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.199 [INFO][4917] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.199 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.199 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.218 [WARNING][4917] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.218 [INFO][4917] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.221 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:05.227134 containerd[1486]: 2026-04-17 23:29:05.224 [INFO][4905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:05.227578 containerd[1486]: time="2026-04-17T23:29:05.227295365Z" level=info msg="TearDown network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" successfully" Apr 17 23:29:05.227578 containerd[1486]: time="2026-04-17T23:29:05.227320285Z" level=info msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" returns successfully" Apr 17 23:29:05.230622 containerd[1486]: time="2026-04-17T23:29:05.230570874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kds4v,Uid:8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:05.245733 systemd[1]: run-netns-cni\x2dcf2c135d\x2df1ab\x2d07fc\x2dc97c\x2daaf0bce6944f.mount: Deactivated successfully. 
Apr 17 23:29:05.465907 kubelet[2529]: I0417 23:29:05.464731 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hjmd8" podStartSLOduration=43.464712842 podStartE2EDuration="43.464712842s" podCreationTimestamp="2026-04-17 23:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:29:05.463308349 +0000 UTC m=+50.535986640" watchObservedRunningTime="2026-04-17 23:29:05.464712842 +0000 UTC m=+50.537391093" Apr 17 23:29:05.496954 systemd-networkd[1386]: cali1468a549822: Link UP Apr 17 23:29:05.500395 kubelet[2529]: I0417 23:29:05.498112 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tkqp6" podStartSLOduration=43.498093185 podStartE2EDuration="43.498093185s" podCreationTimestamp="2026-04-17 23:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:29:05.497672941 +0000 UTC m=+50.570351272" watchObservedRunningTime="2026-04-17 23:29:05.498093185 +0000 UTC m=+50.570771476" Apr 17 23:29:05.497976 systemd-networkd[1386]: cali1468a549822: Gained carrier Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.350 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0 goldmane-cccfbd5cf- calico-system 8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e 989 0 2026-04-17 23:28:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 goldmane-cccfbd5cf-kds4v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1468a549822 [] [] }} 
ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.351 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.384 [INFO][4951] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" HandleID="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.398 [INFO][4951] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" HandleID="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f74b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"goldmane-cccfbd5cf-kds4v", "timestamp":"2026-04-17 23:29:05.384671035 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400020adc0)} Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.399 [INFO][4951] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.399 [INFO][4951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.399 [INFO][4951] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.402 [INFO][4951] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.411 [INFO][4951] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.424 [INFO][4951] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.436 [INFO][4951] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.447 [INFO][4951] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.447 [INFO][4951] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.452 [INFO][4951] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9 Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.461 [INFO][4951] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 
handle="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.486 [INFO][4951] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.199/26] block=192.168.74.192/26 handle="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.486 [INFO][4951] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.199/26] handle="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.486 [INFO][4951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:05.527037 containerd[1486]: 2026-04-17 23:29:05.486 [INFO][4951] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.199/26] IPv6=[] ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" HandleID="k8s-pod-network.228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.490 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 
34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"goldmane-cccfbd5cf-kds4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1468a549822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.491 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.199/32] ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.492 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1468a549822 ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.498 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.499 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9", Pod:"goldmane-cccfbd5cf-kds4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1468a549822", MAC:"aa:88:f2:2a:4f:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:05.528000 containerd[1486]: 2026-04-17 23:29:05.523 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-kds4v" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:05.594803 containerd[1486]: time="2026-04-17T23:29:05.594541101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:05.594803 containerd[1486]: time="2026-04-17T23:29:05.594607582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:05.594803 containerd[1486]: time="2026-04-17T23:29:05.594623262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:05.594803 containerd[1486]: time="2026-04-17T23:29:05.594754503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:05.643112 systemd[1]: Started cri-containerd-228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9.scope - libcontainer container 228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9. 
Apr 17 23:29:05.710401 containerd[1486]: time="2026-04-17T23:29:05.710357834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-kds4v,Uid:8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e,Namespace:calico-system,Attempt:1,} returns sandbox id \"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9\"" Apr 17 23:29:05.715724 containerd[1486]: time="2026-04-17T23:29:05.714754874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:29:05.911095 systemd-networkd[1386]: cali3718be877bb: Gained IPv6LL Apr 17 23:29:06.066099 containerd[1486]: time="2026-04-17T23:29:06.065714106Z" level=info msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.157 [INFO][5031] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.158 [INFO][5031] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" iface="eth0" netns="/var/run/netns/cni-bd225064-bd60-7069-d8e5-2ad1b31ffb35" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.158 [INFO][5031] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" iface="eth0" netns="/var/run/netns/cni-bd225064-bd60-7069-d8e5-2ad1b31ffb35" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.158 [INFO][5031] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" iface="eth0" netns="/var/run/netns/cni-bd225064-bd60-7069-d8e5-2ad1b31ffb35" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.158 [INFO][5031] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.159 [INFO][5031] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.196 [INFO][5038] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.196 [INFO][5038] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.196 [INFO][5038] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.214 [WARNING][5038] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.214 [INFO][5038] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.218 [INFO][5038] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:06.224947 containerd[1486]: 2026-04-17 23:29:06.221 [INFO][5031] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:06.228938 containerd[1486]: time="2026-04-17T23:29:06.225964751Z" level=info msg="TearDown network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" successfully" Apr 17 23:29:06.228938 containerd[1486]: time="2026-04-17T23:29:06.226034312Z" level=info msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" returns successfully" Apr 17 23:29:06.229709 containerd[1486]: time="2026-04-17T23:29:06.229662302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfcc84fc9-h7qh6,Uid:48917d2d-23fd-499e-a8bf-6a46380b5f6d,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:06.234142 systemd[1]: run-netns-cni\x2dbd225064\x2dbd60\x2d7069\x2dd8e5\x2d2ad1b31ffb35.mount: Deactivated successfully. 
Apr 17 23:29:06.431919 systemd-networkd[1386]: calide46a1cba51: Link UP Apr 17 23:29:06.433006 systemd-networkd[1386]: calide46a1cba51: Gained carrier Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.325 [INFO][5046] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0 calico-kube-controllers-6cfcc84fc9- calico-system 48917d2d-23fd-499e-a8bf-6a46380b5f6d 1007 0 2026-04-17 23:28:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cfcc84fc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-6417c65d59 calico-kube-controllers-6cfcc84fc9-h7qh6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calide46a1cba51 [] [] }} ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.325 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.362 [INFO][5058] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" HandleID="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" 
Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.375 [INFO][5058] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" HandleID="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6417c65d59", "pod":"calico-kube-controllers-6cfcc84fc9-h7qh6", "timestamp":"2026-04-17 23:29:06.362914998 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6417c65d59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024f080)} Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.375 [INFO][5058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.375 [INFO][5058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.375 [INFO][5058] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6417c65d59' Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.379 [INFO][5058] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.388 [INFO][5058] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.396 [INFO][5058] ipam/ipam.go 526: Trying affinity for 192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.399 [INFO][5058] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.402 [INFO][5058] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.192/26 host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.402 [INFO][5058] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.192/26 handle="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.405 [INFO][5058] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.411 [INFO][5058] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.192/26 handle="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.421 [INFO][5058] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.74.200/26] block=192.168.74.192/26 handle="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.421 [INFO][5058] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.200/26] handle="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" host="ci-4081-3-6-n-6417c65d59" Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.421 [INFO][5058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:06.458933 containerd[1486]: 2026-04-17 23:29:06.421 [INFO][5058] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.200/26] IPv6=[] ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" HandleID="k8s-pod-network.25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.424 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0", GenerateName:"calico-kube-controllers-6cfcc84fc9-", Namespace:"calico-system", SelfLink:"", UID:"48917d2d-23fd-499e-a8bf-6a46380b5f6d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfcc84fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"", Pod:"calico-kube-controllers-6cfcc84fc9-h7qh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide46a1cba51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.424 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.200/32] ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.424 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide46a1cba51 ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.435 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.435 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0", GenerateName:"calico-kube-controllers-6cfcc84fc9-", Namespace:"calico-system", SelfLink:"", UID:"48917d2d-23fd-499e-a8bf-6a46380b5f6d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfcc84fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc", Pod:"calico-kube-controllers-6cfcc84fc9-h7qh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide46a1cba51", MAC:"3e:7f:99:05:28:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:06.461444 containerd[1486]: 2026-04-17 23:29:06.452 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc" Namespace="calico-system" Pod="calico-kube-controllers-6cfcc84fc9-h7qh6" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:06.488525 systemd-networkd[1386]: cali04e7ecec06f: Gained IPv6LL Apr 17 23:29:06.501748 containerd[1486]: time="2026-04-17T23:29:06.500790532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:06.501748 containerd[1486]: time="2026-04-17T23:29:06.500916053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:06.501748 containerd[1486]: time="2026-04-17T23:29:06.500928733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:06.501748 containerd[1486]: time="2026-04-17T23:29:06.501017614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:06.562098 systemd[1]: Started cri-containerd-25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc.scope - libcontainer container 25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc. 
Apr 17 23:29:06.628100 containerd[1486]: time="2026-04-17T23:29:06.628055056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfcc84fc9-h7qh6,Uid:48917d2d-23fd-499e-a8bf-6a46380b5f6d,Namespace:calico-system,Attempt:1,} returns sandbox id \"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc\"" Apr 17 23:29:06.935206 systemd-networkd[1386]: cali1468a549822: Gained IPv6LL Apr 17 23:29:08.151195 systemd-networkd[1386]: calide46a1cba51: Gained IPv6LL Apr 17 23:29:08.756888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379593814.mount: Deactivated successfully. Apr 17 23:29:09.106424 containerd[1486]: time="2026-04-17T23:29:09.106363005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.109276 containerd[1486]: time="2026-04-17T23:29:09.109204985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 23:29:09.110684 containerd[1486]: time="2026-04-17T23:29:09.110612555Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.118892 containerd[1486]: time="2026-04-17T23:29:09.116164234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.119169 containerd[1486]: time="2026-04-17T23:29:09.117314482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.402510168s" Apr 17 23:29:09.119261 containerd[1486]: time="2026-04-17T23:29:09.119243096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 23:29:09.123139 containerd[1486]: time="2026-04-17T23:29:09.123093323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:29:09.126651 containerd[1486]: time="2026-04-17T23:29:09.125953983Z" level=info msg="CreateContainer within sandbox \"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:29:09.143978 containerd[1486]: time="2026-04-17T23:29:09.143913029Z" level=info msg="CreateContainer within sandbox \"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa\"" Apr 17 23:29:09.145725 containerd[1486]: time="2026-04-17T23:29:09.145152158Z" level=info msg="StartContainer for \"b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa\"" Apr 17 23:29:09.194181 systemd[1]: Started cri-containerd-b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa.scope - libcontainer container b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa. Apr 17 23:29:09.237775 containerd[1486]: time="2026-04-17T23:29:09.237716607Z" level=info msg="StartContainer for \"b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa\" returns successfully" Apr 17 23:29:09.407635 systemd[1]: run-containerd-runc-k8s.io-b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa-runc.TQZMci.mount: Deactivated successfully. 
Apr 17 23:29:09.484306 kubelet[2529]: I0417 23:29:09.482608 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-kds4v" podStartSLOduration=32.075990088 podStartE2EDuration="35.482588406s" podCreationTimestamp="2026-04-17 23:28:34 +0000 UTC" firstStartedPulling="2026-04-17 23:29:05.714476471 +0000 UTC m=+50.787154762" lastFinishedPulling="2026-04-17 23:29:09.121074789 +0000 UTC m=+54.193753080" observedRunningTime="2026-04-17 23:29:09.481949402 +0000 UTC m=+54.554627693" watchObservedRunningTime="2026-04-17 23:29:09.482588406 +0000 UTC m=+54.555266657" Apr 17 23:29:09.514160 systemd[1]: run-containerd-runc-k8s.io-b09c8e385b17dcb94f1035f83dc6e276d6d70e4eb0fb0e874b4ce5ae4ac242fa-runc.aHwli3.mount: Deactivated successfully. Apr 17 23:29:09.891465 systemd[1]: Started sshd@7-159.69.127.159:22-50.85.169.122:38968.service - OpenSSH per-connection server daemon (50.85.169.122:38968). Apr 17 23:29:10.031026 sshd[5215]: Accepted publickey for core from 50.85.169.122 port 38968 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:10.032607 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:10.040443 systemd-logind[1466]: New session 8 of user core. Apr 17 23:29:10.049668 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:29:10.259588 sshd[5215]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:10.264612 systemd[1]: sshd@7-159.69.127.159:22-50.85.169.122:38968.service: Deactivated successfully. Apr 17 23:29:10.269563 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:29:10.272056 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:29:10.275193 systemd-logind[1466]: Removed session 8. 
Apr 17 23:29:11.625602 containerd[1486]: time="2026-04-17T23:29:11.624446631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.625602 containerd[1486]: time="2026-04-17T23:29:11.625552078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 23:29:11.626429 containerd[1486]: time="2026-04-17T23:29:11.626397363Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.630156 containerd[1486]: time="2026-04-17T23:29:11.630110626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.630991 containerd[1486]: time="2026-04-17T23:29:11.630955991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.507812948s" Apr 17 23:29:11.631113 containerd[1486]: time="2026-04-17T23:29:11.631098032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 23:29:11.662498 containerd[1486]: time="2026-04-17T23:29:11.662445985Z" level=info msg="CreateContainer within sandbox \"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 
23:29:11.691264 containerd[1486]: time="2026-04-17T23:29:11.691210683Z" level=info msg="CreateContainer within sandbox \"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"67d3949d0e53b5eb7033d1057213a984070f14f668cc8f965d8e3a2b94cbad7c\"" Apr 17 23:29:11.694652 containerd[1486]: time="2026-04-17T23:29:11.692947813Z" level=info msg="StartContainer for \"67d3949d0e53b5eb7033d1057213a984070f14f668cc8f965d8e3a2b94cbad7c\"" Apr 17 23:29:11.736139 systemd[1]: Started cri-containerd-67d3949d0e53b5eb7033d1057213a984070f14f668cc8f965d8e3a2b94cbad7c.scope - libcontainer container 67d3949d0e53b5eb7033d1057213a984070f14f668cc8f965d8e3a2b94cbad7c. Apr 17 23:29:11.783746 containerd[1486]: time="2026-04-17T23:29:11.783690493Z" level=info msg="StartContainer for \"67d3949d0e53b5eb7033d1057213a984070f14f668cc8f965d8e3a2b94cbad7c\" returns successfully" Apr 17 23:29:12.547841 kubelet[2529]: I0417 23:29:12.547737 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cfcc84fc9-h7qh6" podStartSLOduration=30.544723025 podStartE2EDuration="35.547718396s" podCreationTimestamp="2026-04-17 23:28:37 +0000 UTC" firstStartedPulling="2026-04-17 23:29:06.629784431 +0000 UTC m=+51.702462722" lastFinishedPulling="2026-04-17 23:29:11.632779802 +0000 UTC m=+56.705458093" observedRunningTime="2026-04-17 23:29:12.547643555 +0000 UTC m=+57.620321806" watchObservedRunningTime="2026-04-17 23:29:12.547718396 +0000 UTC m=+57.620396687" Apr 17 23:29:15.063232 containerd[1486]: time="2026-04-17T23:29:15.063023178Z" level=info msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.118 [WARNING][5350] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd", Pod:"coredns-66bc5c9577-hjmd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3718be877bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.119 [INFO][5350] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.119 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" iface="eth0" netns="" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.119 [INFO][5350] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.119 [INFO][5350] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.158 [INFO][5359] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.158 [INFO][5359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.158 [INFO][5359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.173 [WARNING][5359] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.173 [INFO][5359] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.179 [INFO][5359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.182778 containerd[1486]: 2026-04-17 23:29:15.180 [INFO][5350] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.183583 containerd[1486]: time="2026-04-17T23:29:15.182838789Z" level=info msg="TearDown network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" successfully" Apr 17 23:29:15.183583 containerd[1486]: time="2026-04-17T23:29:15.183009070Z" level=info msg="StopPodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" returns successfully" Apr 17 23:29:15.183784 containerd[1486]: time="2026-04-17T23:29:15.183754233Z" level=info msg="RemovePodSandbox for \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" Apr 17 23:29:15.183817 containerd[1486]: time="2026-04-17T23:29:15.183793353Z" level=info msg="Forcibly stopping sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\"" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.224 [WARNING][5373] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6916156d-2f5c-4714-9bd0-3f37fd1a0dc3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"2bcc4cbec109180a705c13ad20bb586c6b67af9a7e508006b5e467b99f948afd", Pod:"coredns-66bc5c9577-hjmd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3718be877bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.225 [INFO][5373] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.225 [INFO][5373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" iface="eth0" netns="" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.225 [INFO][5373] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.225 [INFO][5373] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.256 [INFO][5380] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.256 [INFO][5380] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.256 [INFO][5380] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.271 [WARNING][5380] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.271 [INFO][5380] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" HandleID="k8s-pod-network.9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--hjmd8-eth0" Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.274 [INFO][5380] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.280180 containerd[1486]: 2026-04-17 23:29:15.277 [INFO][5373] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0" Apr 17 23:29:15.281304 containerd[1486]: time="2026-04-17T23:29:15.281122417Z" level=info msg="TearDown network for sandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" successfully" Apr 17 23:29:15.294883 systemd[1]: Started sshd@8-159.69.127.159:22-50.85.169.122:38984.service - OpenSSH per-connection server daemon (50.85.169.122:38984). Apr 17 23:29:15.300787 containerd[1486]: time="2026-04-17T23:29:15.300744671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:15.301219 containerd[1486]: time="2026-04-17T23:29:15.301050752Z" level=info msg="RemovePodSandbox \"9aab4a498f1be21cc6416e812e1a44a9bed0855cec481f73093f0b79f0d4a0f0\" returns successfully" Apr 17 23:29:15.302041 containerd[1486]: time="2026-04-17T23:29:15.301974756Z" level=info msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.346 [WARNING][5395] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0", GenerateName:"calico-kube-controllers-6cfcc84fc9-", Namespace:"calico-system", SelfLink:"", UID:"48917d2d-23fd-499e-a8bf-6a46380b5f6d", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfcc84fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc", Pod:"calico-kube-controllers-6cfcc84fc9-h7qh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide46a1cba51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.348 [INFO][5395] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.348 [INFO][5395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" iface="eth0" netns="" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.348 [INFO][5395] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.348 [INFO][5395] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.383 [INFO][5403] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.383 [INFO][5403] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.383 [INFO][5403] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.394 [WARNING][5403] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.394 [INFO][5403] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.398 [INFO][5403] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.401682 containerd[1486]: 2026-04-17 23:29:15.400 [INFO][5395] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.402407 containerd[1486]: time="2026-04-17T23:29:15.401987953Z" level=info msg="TearDown network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" successfully" Apr 17 23:29:15.402407 containerd[1486]: time="2026-04-17T23:29:15.402017593Z" level=info msg="StopPodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" returns successfully" Apr 17 23:29:15.403312 containerd[1486]: time="2026-04-17T23:29:15.402956678Z" level=info msg="RemovePodSandbox for \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" Apr 17 23:29:15.403312 containerd[1486]: time="2026-04-17T23:29:15.402990118Z" level=info msg="Forcibly stopping sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\"" Apr 17 23:29:15.423856 sshd[5387]: Accepted publickey for core from 50.85.169.122 port 38984 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:15.426080 sshd[5387]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:15.435385 systemd-logind[1466]: New session 9 of user core. Apr 17 23:29:15.441187 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.465 [WARNING][5417] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0", GenerateName:"calico-kube-controllers-6cfcc84fc9-", Namespace:"calico-system", SelfLink:"", UID:"48917d2d-23fd-499e-a8bf-6a46380b5f6d", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfcc84fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"25604b2039f46c34bcd40f10dd36e6856eb70d21ba07b2c055611a28ebcf00dc", Pod:"calico-kube-controllers-6cfcc84fc9-h7qh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide46a1cba51", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.465 [INFO][5417] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.465 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" iface="eth0" netns="" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.465 [INFO][5417] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.465 [INFO][5417] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.488 [INFO][5425] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.488 [INFO][5425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.488 [INFO][5425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.500 [WARNING][5425] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.501 [INFO][5425] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" HandleID="k8s-pod-network.0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--kube--controllers--6cfcc84fc9--h7qh6-eth0" Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.505 [INFO][5425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.510884 containerd[1486]: 2026-04-17 23:29:15.509 [INFO][5417] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a" Apr 17 23:29:15.511319 containerd[1486]: time="2026-04-17T23:29:15.510963392Z" level=info msg="TearDown network for sandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" successfully" Apr 17 23:29:15.515264 containerd[1486]: time="2026-04-17T23:29:15.515191733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:15.515758 containerd[1486]: time="2026-04-17T23:29:15.515285733Z" level=info msg="RemovePodSandbox \"0505aa1b617d0a50122e35b64b7ef4bfa35ffa96bfb8f767a87a3b97825b771a\" returns successfully" Apr 17 23:29:15.516747 containerd[1486]: time="2026-04-17T23:29:15.516306658Z" level=info msg="StopPodSandbox for \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\"" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.571 [WARNING][5441] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"9fdd0055-33f6-4661-b37f-fdeacc1cf39d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab", Pod:"calico-apiserver-5865bd758-vdph8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali305ec1431ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.572 [INFO][5441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.572 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" iface="eth0" netns="" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.572 [INFO][5441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.572 [INFO][5441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.602 [INFO][5454] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.603 [INFO][5454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.603 [INFO][5454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.624 [WARNING][5454] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.624 [INFO][5454] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.627 [INFO][5454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.634617 containerd[1486]: 2026-04-17 23:29:15.631 [INFO][5441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.635404 containerd[1486]: time="2026-04-17T23:29:15.635248065Z" level=info msg="TearDown network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" successfully" Apr 17 23:29:15.635404 containerd[1486]: time="2026-04-17T23:29:15.635292585Z" level=info msg="StopPodSandbox for \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" returns successfully" Apr 17 23:29:15.636076 containerd[1486]: time="2026-04-17T23:29:15.636047428Z" level=info msg="RemovePodSandbox for \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\"" Apr 17 23:29:15.636153 containerd[1486]: time="2026-04-17T23:29:15.636087509Z" level=info msg="Forcibly stopping sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\"" Apr 17 23:29:15.639913 sshd[5387]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:15.650853 systemd[1]: sshd@8-159.69.127.159:22-50.85.169.122:38984.service: Deactivated successfully. 
Apr 17 23:29:15.653729 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:29:15.656958 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:29:15.658890 systemd-logind[1466]: Removed session 9. Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.698 [WARNING][5470] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"9fdd0055-33f6-4661-b37f-fdeacc1cf39d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"360208d39cb80038dc2837aa4d703650e327c8b018f5d52ca09011485b01c5ab", Pod:"calico-apiserver-5865bd758-vdph8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali305ec1431ce", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.698 [INFO][5470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.698 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" iface="eth0" netns="" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.698 [INFO][5470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.698 [INFO][5470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.722 [INFO][5480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.722 [INFO][5480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.722 [INFO][5480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.736 [WARNING][5480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.736 [INFO][5480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" HandleID="k8s-pod-network.87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--vdph8-eth0" Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.739 [INFO][5480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.744696 containerd[1486]: 2026-04-17 23:29:15.742 [INFO][5470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b" Apr 17 23:29:15.745377 containerd[1486]: time="2026-04-17T23:29:15.744742266Z" level=info msg="TearDown network for sandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" successfully" Apr 17 23:29:15.749178 containerd[1486]: time="2026-04-17T23:29:15.749091967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:15.749281 containerd[1486]: time="2026-04-17T23:29:15.749250648Z" level=info msg="RemovePodSandbox \"87f9d005bc40d4ccb69ca40d4a5906ccd13ade6b13dc2a2a5a3d30d36a11221b\" returns successfully" Apr 17 23:29:15.749761 containerd[1486]: time="2026-04-17T23:29:15.749737490Z" level=info msg="StopPodSandbox for \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\"" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.794 [WARNING][5495] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8d9fd23-5b21-4171-9475-81c4da566eda", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6", Pod:"csi-node-driver-zxqb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8a02487be3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.795 [INFO][5495] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.795 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" iface="eth0" netns="" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.795 [INFO][5495] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.795 [INFO][5495] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.817 [INFO][5502] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.817 [INFO][5502] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.817 [INFO][5502] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.837 [WARNING][5502] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.837 [INFO][5502] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.839 [INFO][5502] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.843077 containerd[1486]: 2026-04-17 23:29:15.841 [INFO][5495] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.843697 containerd[1486]: time="2026-04-17T23:29:15.843121855Z" level=info msg="TearDown network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" successfully" Apr 17 23:29:15.843697 containerd[1486]: time="2026-04-17T23:29:15.843146855Z" level=info msg="StopPodSandbox for \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" returns successfully" Apr 17 23:29:15.844266 containerd[1486]: time="2026-04-17T23:29:15.843812339Z" level=info msg="RemovePodSandbox for \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\"" Apr 17 23:29:15.844266 containerd[1486]: time="2026-04-17T23:29:15.843995619Z" level=info msg="Forcibly stopping sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\"" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.886 [WARNING][5516] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8d9fd23-5b21-4171-9475-81c4da566eda", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"ed0cb06a0ec70ba450e1717e34ff450d7e010308a87d293146e0db712233fda6", Pod:"csi-node-driver-zxqb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8a02487be3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.886 [INFO][5516] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.886 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" iface="eth0" netns="" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.886 [INFO][5516] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.886 [INFO][5516] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.907 [INFO][5523] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.908 [INFO][5523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.908 [INFO][5523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.919 [WARNING][5523] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.919 [INFO][5523] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" HandleID="k8s-pod-network.b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Workload="ci--4081--3--6--n--6417c65d59-k8s-csi--node--driver--zxqb4-eth0" Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.922 [INFO][5523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.926624 containerd[1486]: 2026-04-17 23:29:15.924 [INFO][5516] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0" Apr 17 23:29:15.926624 containerd[1486]: time="2026-04-17T23:29:15.926042290Z" level=info msg="TearDown network for sandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" successfully" Apr 17 23:29:15.930868 containerd[1486]: time="2026-04-17T23:29:15.930814913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:15.931169 containerd[1486]: time="2026-04-17T23:29:15.931058714Z" level=info msg="RemovePodSandbox \"b10bb5f75dd77044fda6ea9e99473b060ca7c711666de3188966609b987893e0\" returns successfully" Apr 17 23:29:15.931602 containerd[1486]: time="2026-04-17T23:29:15.931576517Z" level=info msg="StopPodSandbox for \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\"" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:15.976 [WARNING][5537] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c", Pod:"calico-apiserver-5865bd758-tk57g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4930f841eea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:15.977 [INFO][5537] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:15.977 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" iface="eth0" netns="" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:15.977 [INFO][5537] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:15.977 [INFO][5537] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.007 [INFO][5544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.008 [INFO][5544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.008 [INFO][5544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.019 [WARNING][5544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.019 [INFO][5544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.021 [INFO][5544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.025315 containerd[1486]: 2026-04-17 23:29:16.023 [INFO][5537] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.025969 containerd[1486]: time="2026-04-17T23:29:16.025351396Z" level=info msg="TearDown network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" successfully" Apr 17 23:29:16.025969 containerd[1486]: time="2026-04-17T23:29:16.025392717Z" level=info msg="StopPodSandbox for \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" returns successfully" Apr 17 23:29:16.026074 containerd[1486]: time="2026-04-17T23:29:16.026036679Z" level=info msg="RemovePodSandbox for \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\"" Apr 17 23:29:16.026158 containerd[1486]: time="2026-04-17T23:29:16.026078120Z" level=info msg="Forcibly stopping sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\"" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.073 [WARNING][5558] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0", GenerateName:"calico-apiserver-5865bd758-", Namespace:"calico-system", SelfLink:"", UID:"ce1c3055-4d37-4bc7-b4f9-51d0ad58aa3c", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5865bd758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"1a96af700026b2684941639fa6c450073d714fd05175f82659283f4adda3aa1c", Pod:"calico-apiserver-5865bd758-tk57g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4930f841eea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.074 [INFO][5558] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.074 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" iface="eth0" netns="" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.074 [INFO][5558] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.074 [INFO][5558] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.100 [INFO][5565] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.100 [INFO][5565] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.100 [INFO][5565] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.112 [WARNING][5565] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.112 [INFO][5565] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" HandleID="k8s-pod-network.09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Workload="ci--4081--3--6--n--6417c65d59-k8s-calico--apiserver--5865bd758--tk57g-eth0" Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.114 [INFO][5565] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.120545 containerd[1486]: 2026-04-17 23:29:16.117 [INFO][5558] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8" Apr 17 23:29:16.121905 containerd[1486]: time="2026-04-17T23:29:16.120589902Z" level=info msg="TearDown network for sandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" successfully" Apr 17 23:29:16.140027 containerd[1486]: time="2026-04-17T23:29:16.139956828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:16.140027 containerd[1486]: time="2026-04-17T23:29:16.140036909Z" level=info msg="RemovePodSandbox \"09c200f32e1a0cdf9974950a9011ba1d94c3d018440639feffee8d28e0b2d7f8\" returns successfully" Apr 17 23:29:16.140645 containerd[1486]: time="2026-04-17T23:29:16.140557271Z" level=info msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.184 [WARNING][5579] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9", Pod:"goldmane-cccfbd5cf-kds4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1468a549822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.184 [INFO][5579] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.184 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" iface="eth0" netns="" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.184 [INFO][5579] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.184 [INFO][5579] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.209 [INFO][5586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.209 [INFO][5586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.209 [INFO][5586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.226 [WARNING][5586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.226 [INFO][5586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.228 [INFO][5586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.232778 containerd[1486]: 2026-04-17 23:29:16.230 [INFO][5579] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.232778 containerd[1486]: time="2026-04-17T23:29:16.232665003Z" level=info msg="TearDown network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" successfully" Apr 17 23:29:16.232778 containerd[1486]: time="2026-04-17T23:29:16.232691083Z" level=info msg="StopPodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" returns successfully" Apr 17 23:29:16.234753 containerd[1486]: time="2026-04-17T23:29:16.234572171Z" level=info msg="RemovePodSandbox for \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" Apr 17 23:29:16.234753 containerd[1486]: time="2026-04-17T23:29:16.234637611Z" level=info msg="Forcibly stopping sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\"" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.273 [WARNING][5601] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8ec0d86c-9bed-4b8a-bac2-d5b0be9c767e", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"228a512fe399734ac918e2cc3086d9115ff5ce555ad3e563cd19000c981b44e9", Pod:"goldmane-cccfbd5cf-kds4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1468a549822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.274 [INFO][5601] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.274 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" iface="eth0" netns="" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.274 [INFO][5601] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.274 [INFO][5601] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.296 [INFO][5608] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.296 [INFO][5608] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.296 [INFO][5608] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.309 [WARNING][5608] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.309 [INFO][5608] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" HandleID="k8s-pod-network.96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Workload="ci--4081--3--6--n--6417c65d59-k8s-goldmane--cccfbd5cf--kds4v-eth0" Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.311 [INFO][5608] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.315750 containerd[1486]: 2026-04-17 23:29:16.313 [INFO][5601] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc" Apr 17 23:29:16.317305 containerd[1486]: time="2026-04-17T23:29:16.315977215Z" level=info msg="TearDown network for sandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" successfully" Apr 17 23:29:16.320649 containerd[1486]: time="2026-04-17T23:29:16.320606755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:16.320911 containerd[1486]: time="2026-04-17T23:29:16.320869357Z" level=info msg="RemovePodSandbox \"96dea844bc0b8d926120d6c8634e879d57072b5e43e47b04ebf3c1c9792c60bc\" returns successfully" Apr 17 23:29:16.321638 containerd[1486]: time="2026-04-17T23:29:16.321607800Z" level=info msg="StopPodSandbox for \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\"" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.362 [WARNING][5622] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.363 [INFO][5622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.363 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" iface="eth0" netns="" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.363 [INFO][5622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.363 [INFO][5622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.385 [INFO][5630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.386 [INFO][5630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.386 [INFO][5630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.398 [WARNING][5630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.398 [INFO][5630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.400 [INFO][5630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.404452 containerd[1486]: 2026-04-17 23:29:16.402 [INFO][5622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.405165 containerd[1486]: time="2026-04-17T23:29:16.404525050Z" level=info msg="TearDown network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" successfully" Apr 17 23:29:16.405165 containerd[1486]: time="2026-04-17T23:29:16.404562771Z" level=info msg="StopPodSandbox for \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" returns successfully" Apr 17 23:29:16.405321 containerd[1486]: time="2026-04-17T23:29:16.405273054Z" level=info msg="RemovePodSandbox for \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\"" Apr 17 23:29:16.405421 containerd[1486]: time="2026-04-17T23:29:16.405351414Z" level=info msg="Forcibly stopping sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\"" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.449 [WARNING][5644] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" WorkloadEndpoint="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.449 [INFO][5644] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.449 [INFO][5644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" iface="eth0" netns="" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.449 [INFO][5644] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.449 [INFO][5644] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.475 [INFO][5651] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.476 [INFO][5651] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.476 [INFO][5651] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.491 [WARNING][5651] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.491 [INFO][5651] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" HandleID="k8s-pod-network.bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Workload="ci--4081--3--6--n--6417c65d59-k8s-whisker--68d6dbbfb6--zsktw-eth0" Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.493 [INFO][5651] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.497347 containerd[1486]: 2026-04-17 23:29:16.495 [INFO][5644] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b" Apr 17 23:29:16.498096 containerd[1486]: time="2026-04-17T23:29:16.497984308Z" level=info msg="TearDown network for sandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" successfully" Apr 17 23:29:16.503751 containerd[1486]: time="2026-04-17T23:29:16.503707133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:16.504065 containerd[1486]: time="2026-04-17T23:29:16.503968175Z" level=info msg="RemovePodSandbox \"bc473a3d31ecf60c941597f3bb3fada00e6673948a5dc3878e48a11f8fc8f26b\" returns successfully" Apr 17 23:29:16.504558 containerd[1486]: time="2026-04-17T23:29:16.504532977Z" level=info msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.546 [WARNING][5665] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d866b1df-e265-46c7-a073-f92742fbb2a8", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51", Pod:"coredns-66bc5c9577-tkqp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e7ecec06f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.547 [INFO][5665] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.547 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" iface="eth0" netns="" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.547 [INFO][5665] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.547 [INFO][5665] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.575 [INFO][5672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.575 [INFO][5672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.575 [INFO][5672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.593 [WARNING][5672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.593 [INFO][5672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.596 [INFO][5672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.603074 containerd[1486]: 2026-04-17 23:29:16.599 [INFO][5665] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.604316 containerd[1486]: time="2026-04-17T23:29:16.603116858Z" level=info msg="TearDown network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" successfully" Apr 17 23:29:16.604316 containerd[1486]: time="2026-04-17T23:29:16.603143778Z" level=info msg="StopPodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" returns successfully" Apr 17 23:29:16.604316 containerd[1486]: time="2026-04-17T23:29:16.603948541Z" level=info msg="RemovePodSandbox for \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" Apr 17 23:29:16.604316 containerd[1486]: time="2026-04-17T23:29:16.603996902Z" level=info msg="Forcibly stopping sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\"" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.670 [WARNING][5686] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d866b1df-e265-46c7-a073-f92742fbb2a8", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6417c65d59", ContainerID:"817fc56864e7578ccb12ab85c1ac5641c62c2acd94c2a3f8fd85365bfcb85d51", Pod:"coredns-66bc5c9577-tkqp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e7ecec06f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.671 [INFO][5686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.671 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" iface="eth0" netns="" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.671 [INFO][5686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.671 [INFO][5686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.692 [INFO][5699] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.692 [INFO][5699] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.692 [INFO][5699] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.707 [WARNING][5699] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.707 [INFO][5699] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" HandleID="k8s-pod-network.b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Workload="ci--4081--3--6--n--6417c65d59-k8s-coredns--66bc5c9577--tkqp6-eth0" Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.710 [INFO][5699] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.713963 containerd[1486]: 2026-04-17 23:29:16.712 [INFO][5686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce" Apr 17 23:29:16.713963 containerd[1486]: time="2026-04-17T23:29:16.713766832Z" level=info msg="TearDown network for sandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" successfully" Apr 17 23:29:16.719616 containerd[1486]: time="2026-04-17T23:29:16.719556458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:16.719746 containerd[1486]: time="2026-04-17T23:29:16.719669658Z" level=info msg="RemovePodSandbox \"b3080180d8d38ae2028ed71916812eeb03cf94b645207feb24a50cf88f46f1ce\" returns successfully" Apr 17 23:29:19.829841 kubelet[2529]: I0417 23:29:19.828757 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:20.689788 systemd[1]: Started sshd@9-159.69.127.159:22-50.85.169.122:59538.service - OpenSSH per-connection server daemon (50.85.169.122:59538). Apr 17 23:29:20.821788 sshd[5710]: Accepted publickey for core from 50.85.169.122 port 59538 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:20.825623 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:20.846963 systemd-logind[1466]: New session 10 of user core. Apr 17 23:29:20.855141 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:29:21.056402 sshd[5710]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:21.062321 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:29:21.062517 systemd[1]: sshd@9-159.69.127.159:22-50.85.169.122:59538.service: Deactivated successfully. Apr 17 23:29:21.069289 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:29:21.072829 systemd-logind[1466]: Removed session 10. Apr 17 23:29:26.092489 systemd[1]: Started sshd@10-159.69.127.159:22-50.85.169.122:59550.service - OpenSSH per-connection server daemon (50.85.169.122:59550). Apr 17 23:29:26.252967 sshd[5769]: Accepted publickey for core from 50.85.169.122 port 59550 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:26.254859 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:26.260587 systemd-logind[1466]: New session 11 of user core. Apr 17 23:29:26.272572 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 17 23:29:26.487696 sshd[5769]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:26.494190 systemd[1]: sshd@10-159.69.127.159:22-50.85.169.122:59550.service: Deactivated successfully. Apr 17 23:29:26.497017 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:29:26.498350 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:29:26.499442 systemd-logind[1466]: Removed session 11. Apr 17 23:29:26.518388 systemd[1]: Started sshd@11-159.69.127.159:22-50.85.169.122:59554.service - OpenSSH per-connection server daemon (50.85.169.122:59554). Apr 17 23:29:26.645767 sshd[5783]: Accepted publickey for core from 50.85.169.122 port 59554 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:26.649516 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:26.658716 systemd-logind[1466]: New session 12 of user core. Apr 17 23:29:26.664154 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:29:26.909676 sshd[5783]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:26.913966 systemd[1]: sshd@11-159.69.127.159:22-50.85.169.122:59554.service: Deactivated successfully. Apr 17 23:29:26.917933 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:29:26.924219 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:29:26.943365 systemd[1]: Started sshd@12-159.69.127.159:22-50.85.169.122:59564.service - OpenSSH per-connection server daemon (50.85.169.122:59564). Apr 17 23:29:26.945988 systemd-logind[1466]: Removed session 12. Apr 17 23:29:27.072255 sshd[5800]: Accepted publickey for core from 50.85.169.122 port 59564 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:27.074515 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:27.080548 systemd-logind[1466]: New session 13 of user core. 
Apr 17 23:29:27.092197 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:29:27.268355 sshd[5800]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:27.275548 systemd[1]: sshd@12-159.69.127.159:22-50.85.169.122:59564.service: Deactivated successfully. Apr 17 23:29:27.279249 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:29:27.280820 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:29:27.282522 systemd-logind[1466]: Removed session 13. Apr 17 23:29:30.616163 systemd[1]: run-containerd-runc-k8s.io-7ffdeb2e6eccaa131994f3c2906c0308f73d891844d5ac9cb366e83602a38d9f-runc.KejCff.mount: Deactivated successfully. Apr 17 23:29:32.307210 systemd[1]: Started sshd@13-159.69.127.159:22-50.85.169.122:59990.service - OpenSSH per-connection server daemon (50.85.169.122:59990). Apr 17 23:29:32.439265 sshd[5835]: Accepted publickey for core from 50.85.169.122 port 59990 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:32.442612 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:32.449741 systemd-logind[1466]: New session 14 of user core. Apr 17 23:29:32.454119 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:29:32.651591 sshd[5835]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:32.656960 systemd[1]: sshd@13-159.69.127.159:22-50.85.169.122:59990.service: Deactivated successfully. Apr 17 23:29:32.659819 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:29:32.661095 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:29:32.663017 systemd-logind[1466]: Removed session 14. Apr 17 23:29:37.682196 systemd[1]: Started sshd@14-159.69.127.159:22-50.85.169.122:60004.service - OpenSSH per-connection server daemon (50.85.169.122:60004). 
Apr 17 23:29:37.812480 sshd[5858]: Accepted publickey for core from 50.85.169.122 port 60004 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:37.813930 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:37.820062 systemd-logind[1466]: New session 15 of user core. Apr 17 23:29:37.830237 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:29:38.004341 sshd[5858]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:38.010406 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:29:38.011288 systemd[1]: sshd@14-159.69.127.159:22-50.85.169.122:60004.service: Deactivated successfully. Apr 17 23:29:38.014611 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:29:38.016703 systemd-logind[1466]: Removed session 15. Apr 17 23:29:38.038505 systemd[1]: Started sshd@15-159.69.127.159:22-50.85.169.122:60020.service - OpenSSH per-connection server daemon (50.85.169.122:60020). Apr 17 23:29:38.164634 sshd[5871]: Accepted publickey for core from 50.85.169.122 port 60020 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:38.167930 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:38.176860 systemd-logind[1466]: New session 16 of user core. Apr 17 23:29:38.184021 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:29:38.545214 sshd[5871]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:38.552045 systemd[1]: sshd@15-159.69.127.159:22-50.85.169.122:60020.service: Deactivated successfully. Apr 17 23:29:38.555071 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:29:38.556350 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:29:38.557756 systemd-logind[1466]: Removed session 16. 
Apr 17 23:29:38.572368 systemd[1]: Started sshd@16-159.69.127.159:22-50.85.169.122:60024.service - OpenSSH per-connection server daemon (50.85.169.122:60024).
Apr 17 23:29:38.704364 sshd[5882]: Accepted publickey for core from 50.85.169.122 port 60024 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:29:38.707184 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:29:38.712940 systemd-logind[1466]: New session 17 of user core.
Apr 17 23:29:38.721275 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:29:39.584578 sshd[5882]: pam_unix(sshd:session): session closed for user core
Apr 17 23:29:39.591617 systemd[1]: sshd@16-159.69.127.159:22-50.85.169.122:60024.service: Deactivated successfully.
Apr 17 23:29:39.597603 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:29:39.601162 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:29:39.625182 systemd[1]: Started sshd@17-159.69.127.159:22-50.85.169.122:57238.service - OpenSSH per-connection server daemon (50.85.169.122:57238).
Apr 17 23:29:39.626791 systemd-logind[1466]: Removed session 17.
Apr 17 23:29:39.751695 sshd[5906]: Accepted publickey for core from 50.85.169.122 port 57238 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:29:39.753581 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:29:39.760434 systemd-logind[1466]: New session 18 of user core.
Apr 17 23:29:39.766191 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:29:40.089621 sshd[5906]: pam_unix(sshd:session): session closed for user core
Apr 17 23:29:40.102182 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:29:40.104323 systemd[1]: sshd@17-159.69.127.159:22-50.85.169.122:57238.service: Deactivated successfully.
Apr 17 23:29:40.109105 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:29:40.124347 systemd-logind[1466]: Removed session 18.
Apr 17 23:29:40.133226 systemd[1]: Started sshd@18-159.69.127.159:22-50.85.169.122:57242.service - OpenSSH per-connection server daemon (50.85.169.122:57242).
Apr 17 23:29:40.254502 sshd[5918]: Accepted publickey for core from 50.85.169.122 port 57242 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:29:40.257821 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:29:40.267900 systemd-logind[1466]: New session 19 of user core.
Apr 17 23:29:40.269919 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:29:40.464593 sshd[5918]: pam_unix(sshd:session): session closed for user core
Apr 17 23:29:40.475270 systemd[1]: sshd@18-159.69.127.159:22-50.85.169.122:57242.service: Deactivated successfully.
Apr 17 23:29:40.487364 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:29:40.489305 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:29:40.492527 systemd-logind[1466]: Removed session 19.
Apr 17 23:29:42.173964 kubelet[2529]: I0417 23:29:42.173391 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:29:45.496386 systemd[1]: Started sshd@19-159.69.127.159:22-50.85.169.122:57258.service - OpenSSH per-connection server daemon (50.85.169.122:57258).
Apr 17 23:29:45.611955 sshd[5998]: Accepted publickey for core from 50.85.169.122 port 57258 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:29:45.614442 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:29:45.620989 systemd-logind[1466]: New session 20 of user core.
Apr 17 23:29:45.627377 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:29:45.806111 sshd[5998]: pam_unix(sshd:session): session closed for user core
Apr 17 23:29:45.812905 systemd[1]: sshd@19-159.69.127.159:22-50.85.169.122:57258.service: Deactivated successfully.
Apr 17 23:29:45.818005 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:29:45.820732 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:29:45.822920 systemd-logind[1466]: Removed session 20.
Apr 17 23:29:50.842443 systemd[1]: Started sshd@20-159.69.127.159:22-50.85.169.122:57792.service - OpenSSH per-connection server daemon (50.85.169.122:57792).
Apr 17 23:29:50.967280 sshd[6017]: Accepted publickey for core from 50.85.169.122 port 57792 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:29:50.970478 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:29:50.975970 systemd-logind[1466]: New session 21 of user core.
Apr 17 23:29:50.983122 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:29:51.158994 sshd[6017]: pam_unix(sshd:session): session closed for user core
Apr 17 23:29:51.166175 systemd[1]: sshd@20-159.69.127.159:22-50.85.169.122:57792.service: Deactivated successfully.
Apr 17 23:29:51.171790 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:29:51.174274 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:29:51.175703 systemd-logind[1466]: Removed session 21.
Apr 17 23:30:05.884349 systemd[1]: cri-containerd-402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c.scope: Deactivated successfully.
Apr 17 23:30:05.884704 systemd[1]: cri-containerd-402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c.scope: Consumed 16.916s CPU time.
Apr 17 23:30:05.906284 containerd[1486]: time="2026-04-17T23:30:05.906017879Z" level=info msg="shim disconnected" id=402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c namespace=k8s.io
Apr 17 23:30:05.906284 containerd[1486]: time="2026-04-17T23:30:05.906075080Z" level=warning msg="cleaning up after shim disconnected" id=402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c namespace=k8s.io
Apr 17 23:30:05.906284 containerd[1486]: time="2026-04-17T23:30:05.906084120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:05.912382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c-rootfs.mount: Deactivated successfully.
Apr 17 23:30:06.141175 kubelet[2529]: E0417 23:30:06.141036 2529 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54338->10.0.0.2:2379: read: connection timed out"
Apr 17 23:30:06.338273 systemd[1]: cri-containerd-9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3.scope: Deactivated successfully.
Apr 17 23:30:06.338999 systemd[1]: cri-containerd-9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3.scope: Consumed 3.114s CPU time, 16.4M memory peak, 0B memory swap peak.
Apr 17 23:30:06.368641 containerd[1486]: time="2026-04-17T23:30:06.368305294Z" level=info msg="shim disconnected" id=9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3 namespace=k8s.io
Apr 17 23:30:06.368641 containerd[1486]: time="2026-04-17T23:30:06.368380055Z" level=warning msg="cleaning up after shim disconnected" id=9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3 namespace=k8s.io
Apr 17 23:30:06.368641 containerd[1486]: time="2026-04-17T23:30:06.368388175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:30:06.373247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3-rootfs.mount: Deactivated successfully.
Apr 17 23:30:06.773377 kubelet[2529]: I0417 23:30:06.773109 2529 scope.go:117] "RemoveContainer" containerID="402b2e66f0cb26ef411167b8504b5aa63838b952839f47bdebd3a6f2a05c907c"
Apr 17 23:30:06.773746 kubelet[2529]: I0417 23:30:06.773729 2529 scope.go:117] "RemoveContainer" containerID="9b1575d46b76d0f2ac26a9a634640796de5952738932448a31980a6216ae03b3"
Apr 17 23:30:06.778889 containerd[1486]: time="2026-04-17T23:30:06.778460297Z" level=info msg="CreateContainer within sandbox \"0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:30:06.780142 containerd[1486]: time="2026-04-17T23:30:06.780096545Z" level=info msg="CreateContainer within sandbox \"6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:30:06.800960 containerd[1486]: time="2026-04-17T23:30:06.800818480Z" level=info msg="CreateContainer within sandbox \"0c524d87e50885b2baafedfe916053c54fc9f355e4ae516f8a040b9f13ef44bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8437cf71f98ed67c7ee8c4ea69421c0593ae8277e109257ab7fd81f43315b52c\""
Apr 17 23:30:06.802197 containerd[1486]: time="2026-04-17T23:30:06.802133166Z" level=info msg="StartContainer for \"8437cf71f98ed67c7ee8c4ea69421c0593ae8277e109257ab7fd81f43315b52c\""
Apr 17 23:30:06.806218 containerd[1486]: time="2026-04-17T23:30:06.806157705Z" level=info msg="CreateContainer within sandbox \"6a744d73c96a8486765210d81440e2b11a23f286d6ac9256fc653390e7f524ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d87367bb7a86be61189152e5b1cb5182b7cf052ad9d5c3df341d952897c6b032\""
Apr 17 23:30:06.807559 containerd[1486]: time="2026-04-17T23:30:06.807381190Z" level=info msg="StartContainer for \"d87367bb7a86be61189152e5b1cb5182b7cf052ad9d5c3df341d952897c6b032\""
Apr 17 23:30:06.842180 systemd[1]: Started cri-containerd-8437cf71f98ed67c7ee8c4ea69421c0593ae8277e109257ab7fd81f43315b52c.scope - libcontainer container 8437cf71f98ed67c7ee8c4ea69421c0593ae8277e109257ab7fd81f43315b52c.
Apr 17 23:30:06.861125 systemd[1]: Started cri-containerd-d87367bb7a86be61189152e5b1cb5182b7cf052ad9d5c3df341d952897c6b032.scope - libcontainer container d87367bb7a86be61189152e5b1cb5182b7cf052ad9d5c3df341d952897c6b032.
Apr 17 23:30:06.907259 containerd[1486]: time="2026-04-17T23:30:06.906486805Z" level=info msg="StartContainer for \"8437cf71f98ed67c7ee8c4ea69421c0593ae8277e109257ab7fd81f43315b52c\" returns successfully"
Apr 17 23:30:06.918960 containerd[1486]: time="2026-04-17T23:30:06.918083379Z" level=info msg="StartContainer for \"d87367bb7a86be61189152e5b1cb5182b7cf052ad9d5c3df341d952897c6b032\" returns successfully"
Apr 17 23:30:08.889406 kubelet[2529]: E0417 23:30:08.888795 2529 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53994->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-6417c65d59.18a748c7c3dafa86 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-6417c65d59,UID:97aca4857ce9d5f7e69afd4065b4364e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-6417c65d59,},FirstTimestamp:2026-04-17 23:30:00.842549894 +0000 UTC m=+105.915228225,LastTimestamp:2026-04-17 23:30:00.842549894 +0000 UTC m=+105.915228225,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-6417c65d59,}"