Apr 13 19:22:39.911047 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:22:39.911073 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:22:39.911084 kernel: KASLR enabled
Apr 13 19:22:39.911090 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:22:39.911095 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:22:39.911101 kernel: random: crng init done
Apr 13 19:22:39.911108 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:22:39.911114 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:22:39.911120 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:22:39.911128 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911134 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911140 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911146 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911152 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911162 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911171 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911178 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911186 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:39.911193 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:22:39.911200 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:22:39.911207 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:22:39.911215 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:22:39.911223 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 13 19:22:39.911230 kernel: Zone ranges:
Apr 13 19:22:39.911237 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:22:39.911246 kernel: DMA32 empty
Apr 13 19:22:39.911254 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:22:39.911261 kernel: Movable zone start for each node
Apr 13 19:22:39.911268 kernel: Early memory node ranges
Apr 13 19:22:39.911276 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:22:39.911283 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:22:39.911291 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:22:39.911298 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:22:39.911305 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:22:39.911313 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:22:39.911320 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:22:39.911327 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:22:39.911336 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:22:39.911343 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:22:39.911351 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:22:39.911362 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:22:39.911370 kernel: psci: Trusted OS migration not required
Apr 13 19:22:39.911378 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:22:39.911388 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:22:39.911395 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:22:39.911403 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:22:39.911410 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:22:39.911417 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:22:39.911424 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:22:39.911430 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:22:39.911437 kernel: CPU features: detected: Spectre-v4
Apr 13 19:22:39.911444 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:22:39.911479 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:22:39.911489 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:22:39.911496 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:22:39.911503 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:22:39.911509 kernel: alternatives: applying boot alternatives
Apr 13 19:22:39.911517 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:22:39.911525 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:22:39.911532 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:22:39.911538 kernel: Fallback order for Node 0: 0
Apr 13 19:22:39.911551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:22:39.911561 kernel: Policy zone: Normal
Apr 13 19:22:39.911570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:22:39.911579 kernel: software IO TLB: area num 2.
Apr 13 19:22:39.911587 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:22:39.911597 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 13 19:22:39.911604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:22:39.911613 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:22:39.911621 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:22:39.911629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:22:39.911637 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:22:39.911645 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:22:39.911653 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:22:39.911661 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:22:39.911682 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:22:39.911694 kernel: GICv3: 256 SPIs implemented
Apr 13 19:22:39.911721 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:22:39.911729 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:22:39.911737 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:22:39.911745 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:22:39.911753 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:22:39.911761 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:22:39.911768 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:22:39.911776 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:22:39.911784 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:22:39.911792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:22:39.911801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:22:39.911808 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:22:39.911815 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:22:39.911822 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:22:39.911829 kernel: Console: colour dummy device 80x25
Apr 13 19:22:39.911836 kernel: ACPI: Core revision 20230628
Apr 13 19:22:39.911843 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:22:39.911850 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:22:39.911857 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:22:39.911864 kernel: landlock: Up and running.
Apr 13 19:22:39.911873 kernel: SELinux: Initializing.
Apr 13 19:22:39.911880 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:22:39.911887 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:22:39.911894 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:22:39.911901 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:22:39.911908 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:22:39.911916 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:22:39.911923 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:22:39.911929 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:22:39.911938 kernel: Remapping and enabling EFI services.
Apr 13 19:22:39.911945 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:22:39.911952 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:22:39.911960 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:22:39.911967 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:22:39.911974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:22:39.911980 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:22:39.911987 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:22:39.911995 kernel: SMP: Total of 2 processors activated.
Apr 13 19:22:39.912002 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:22:39.912010 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:22:39.912018 kernel: CPU features: detected: Common not Private translations
Apr 13 19:22:39.912030 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:22:39.912039 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:22:39.912046 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:22:39.912054 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:22:39.912061 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:22:39.912069 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:22:39.912078 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:22:39.912085 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:22:39.912092 kernel: alternatives: applying system-wide alternatives
Apr 13 19:22:39.912100 kernel: devtmpfs: initialized
Apr 13 19:22:39.912107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:22:39.912115 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:22:39.912122 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:22:39.912130 kernel: SMBIOS 3.0.0 present.
Apr 13 19:22:39.912139 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:22:39.912146 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:22:39.912153 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:22:39.912161 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:22:39.912168 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:22:39.912176 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:22:39.912183 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Apr 13 19:22:39.912190 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:22:39.912198 kernel: cpuidle: using governor menu
Apr 13 19:22:39.912206 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:22:39.912214 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:22:39.912221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:22:39.912229 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:22:39.912236 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:22:39.912244 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:22:39.912251 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:22:39.912259 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:22:39.912266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:22:39.912275 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:22:39.912282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:22:39.912290 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:22:39.912298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:22:39.912305 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:22:39.912312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:22:39.912320 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:22:39.912327 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:22:39.912334 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:22:39.912344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:22:39.912351 kernel: ACPI: Interpreter enabled
Apr 13 19:22:39.912358 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:22:39.912366 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:22:39.912373 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:22:39.912380 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:22:39.912388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:22:39.912596 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:22:39.912745 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:22:39.912819 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:22:39.912884 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:22:39.912948 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:22:39.912958 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:22:39.912965 kernel: PCI host bridge to bus 0000:00
Apr 13 19:22:39.913050 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:22:39.913126 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:22:39.913206 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:22:39.913272 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:22:39.913367 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:22:39.913444 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:22:39.913556 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:22:39.913627 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:22:39.913731 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.913803 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:22:39.913879 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.913946 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:22:39.914020 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.914086 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:22:39.914163 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.914229 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:22:39.914301 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.914366 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:22:39.914564 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.914644 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:22:39.914797 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.914881 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:22:39.914957 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.915024 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:22:39.915097 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:39.915163 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:22:39.915240 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:22:39.915307 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:22:39.915385 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:22:39.916210 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:22:39.916356 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:22:39.916439 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:22:39.916598 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:22:39.916698 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:22:39.917265 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:22:39.917352 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:22:39.917425 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:22:39.919255 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:22:39.919382 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:22:39.919657 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:22:39.919774 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:22:39.919856 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:22:39.919937 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:22:39.920007 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:22:39.920075 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:22:39.920160 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:22:39.920230 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:22:39.920298 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:22:39.920365 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:22:39.920438 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:22:39.920532 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:22:39.920612 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:22:39.920818 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:22:39.920895 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:22:39.920961 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:22:39.921031 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:22:39.921099 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:22:39.921164 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:22:39.921236 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:22:39.921302 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:22:39.921375 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:22:39.922553 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:22:39.922734 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:22:39.922811 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:22:39.922888 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:22:39.922954 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:22:39.923019 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:22:39.923102 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:22:39.923169 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:22:39.923253 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:22:39.923327 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:22:39.923393 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:22:39.925531 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:22:39.925755 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:22:39.925836 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:22:39.925913 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:22:39.925985 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:22:39.926052 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:39.926133 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:22:39.926209 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:39.926291 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:22:39.926371 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:39.926471 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:22:39.926560 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:39.926641 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:22:39.926735 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:39.926820 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:39.926898 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:39.926981 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:39.927051 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:39.927122 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:39.927200 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:39.927277 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:22:39.927375 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:39.927469 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:22:39.927558 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:22:39.927638 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:22:39.927740 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:22:39.927824 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:22:39.927901 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:22:39.927977 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:22:39.928050 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:22:39.928127 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:22:39.928202 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:22:39.928285 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:22:39.928353 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:22:39.928419 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:22:39.928517 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:22:39.928600 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:22:39.928666 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:22:39.928788 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:22:39.928876 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:22:39.928947 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:22:39.929013 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:22:39.929084 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:22:39.929176 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:22:39.929350 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:22:39.929432 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:22:39.930259 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:22:39.930349 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:22:39.930416 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:22:39.930852 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:39.930944 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:22:39.931022 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:22:39.931089 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:22:39.931155 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:22:39.931228 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:39.931309 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:22:39.931377 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:22:39.931446 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:22:39.931879 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:22:39.931959 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:22:39.932024 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:39.932100 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:22:39.932169 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:22:39.932233 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:22:39.932298 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:22:39.932362 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:39.932435 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:22:39.932544 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:22:39.932633 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:22:39.932723 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:22:39.932801 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:22:39.932869 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:39.932945 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:22:39.933031 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:22:39.933103 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:22:39.933176 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:22:39.933242 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:39.933307 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:39.933381 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:22:39.933450 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:22:39.935656 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:22:39.935784 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:22:39.935883 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:22:39.935993 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:39.936064 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:39.936193 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:22:39.936269 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:22:39.936335 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:39.936400 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:39.937562 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:22:39.937685 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:22:39.937778 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:22:39.937845 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:39.937915 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:22:39.937977 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:22:39.938037 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:22:39.938113 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:22:39.938177 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:22:39.938253 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:39.938326 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:22:39.938388 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:22:39.938450 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:39.939617 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:22:39.939708 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:22:39.939787 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:39.939858 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:22:39.939920 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:22:39.939996 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:39.940065 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:22:39.940126 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:22:39.940186 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:39.940262 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:22:39.940326 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:39.940392 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:39.940476 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:22:39.941734 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:39.941802 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:39.941872 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:22:39.941934 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:39.942019 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:39.942091 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:22:39.942153 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:22:39.942219 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:39.942229 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:22:39.942237 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:22:39.942245 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:22:39.942254 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:22:39.942262 kernel: iommu: Default domain type: Translated
Apr 13 19:22:39.942273 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:22:39.942281 kernel: efivars: Registered efivars operations
Apr 13 19:22:39.942289 kernel: vgaarb: loaded
Apr 13 19:22:39.942299 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:22:39.942307 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:22:39.942315 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:22:39.942322 kernel: pnp: PnP ACPI init
Apr 13 19:22:39.942399 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:22:39.942410 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:22:39.942418 kernel: NET: Registered PF_INET protocol family
Apr 13 19:22:39.942426 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:22:39.942437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:22:39.942445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:22:39.943112 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:22:39.943144
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:22:39.943153 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:22:39.943161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:22:39.943170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:22:39.943178 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:22:39.943318 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:22:39.943337 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:22:39.943346 kernel: kvm [1]: HYP mode not available Apr 13 19:22:39.943354 kernel: Initialise system trusted keyrings Apr 13 19:22:39.943362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:22:39.943370 kernel: Key type asymmetric registered Apr 13 19:22:39.943378 kernel: Asymmetric key parser 'x509' registered Apr 13 19:22:39.943386 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:22:39.943394 kernel: io scheduler mq-deadline registered Apr 13 19:22:39.943402 kernel: io scheduler kyber registered Apr 13 19:22:39.943412 kernel: io scheduler bfq registered Apr 13 19:22:39.943420 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:22:39.943598 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:22:39.943693 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:22:39.943768 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.943867 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:22:39.943938 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:22:39.944009 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.944081 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:22:39.944148 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:22:39.944213 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.944284 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:22:39.944354 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:22:39.944533 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.944627 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:22:39.944774 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:22:39.944849 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.944920 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:22:39.944989 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:22:39.945064 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.945134 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:22:39.945247 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:22:39.945331 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.945406 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:22:39.945524 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:22:39.945603 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:39.945615 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:22:39.945728 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 13 19:22:39.945804 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 13 19:22:39.945870 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 13 19:22:39.945881 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:22:39.945893 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:22:39.945917 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:22:39.946003 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 13 19:22:39.946083 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 13 19:22:39.946094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:22:39.946102 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:22:39.946172 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 13 19:22:39.946187 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 13 19:22:39.946195 kernel: thunder_xcv, ver 1.0
Apr 13 19:22:39.946205 kernel: thunder_bgx, ver 1.0
Apr 13 19:22:39.946213 kernel: nicpf, ver 1.0
Apr 13 19:22:39.946221 kernel: nicvf, ver 1.0
Apr 13 19:22:39.946302 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:22:39.946366 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:22:39 UTC (1776108159)
Apr 13 19:22:39.946377 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:22:39.946385 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 13 19:22:39.946393 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:22:39.946403 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:22:39.946411 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:22:39.946418 kernel: Segment Routing with IPv6
Apr 13 19:22:39.946426 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:22:39.946434 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:22:39.946442 kernel: Key type dns_resolver registered
Apr 13 19:22:39.946450 kernel: registered taskstats version 1
Apr 13 19:22:39.946506 kernel: Loading compiled-in X.509 certificates
Apr 13 19:22:39.946515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:22:39.946526 kernel: Key type .fscrypt registered
Apr 13 19:22:39.946533 kernel: Key type fscrypt-provisioning registered
Apr 13 19:22:39.946541 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:22:39.946549 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:22:39.946556 kernel: ima: No architecture policies found
Apr 13 19:22:39.946565 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:22:39.946572 kernel: clk: Disabling unused clocks
Apr 13 19:22:39.946580 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:22:39.946588 kernel: Run /init as init process
Apr 13 19:22:39.946597 kernel: with arguments:
Apr 13 19:22:39.946605 kernel: /init
Apr 13 19:22:39.946612 kernel: with environment:
Apr 13 19:22:39.946620 kernel: HOME=/
Apr 13 19:22:39.946628 kernel: TERM=linux
Apr 13 19:22:39.946638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:22:39.946649 systemd[1]: Detected virtualization kvm.
Apr 13 19:22:39.946657 systemd[1]: Detected architecture arm64.
Apr 13 19:22:39.946667 systemd[1]: Running in initrd.
Apr 13 19:22:39.946688 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:22:39.946696 systemd[1]: Hostname set to .
Apr 13 19:22:39.946704 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:22:39.946713 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:22:39.946723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:22:39.946733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:22:39.946744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:22:39.946757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:22:39.946766 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:22:39.946774 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:22:39.946784 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:22:39.946792 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:22:39.946801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:22:39.946809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:22:39.946819 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:22:39.946827 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:22:39.946835 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:22:39.946844 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:22:39.946852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:22:39.946860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:22:39.946869 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:22:39.946877 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:22:39.946889 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:22:39.946898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:22:39.946906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:22:39.946914 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:22:39.946923 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:22:39.946931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:22:39.946939 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:22:39.946948 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:22:39.946956 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:22:39.946966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:22:39.946975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:22:39.946983 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:22:39.947019 systemd-journald[238]: Collecting audit messages is disabled.
Apr 13 19:22:39.947042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:22:39.947050 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:22:39.947060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:22:39.947069 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:22:39.947078 kernel: Bridge firewalling registered
Apr 13 19:22:39.947086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:22:39.947095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:22:39.947104 systemd-journald[238]: Journal started
Apr 13 19:22:39.947124 systemd-journald[238]: Runtime Journal (/run/log/journal/e1b2fa96966d4a67ab0d949540d9016f) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:22:39.910282 systemd-modules-load[239]: Inserted module 'overlay'
Apr 13 19:22:39.937227 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 13 19:22:39.950644 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:22:39.957140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:22:39.961633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:22:39.973759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:22:39.976711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:22:39.985797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:22:39.988620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:22:40.001884 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:22:40.011648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:22:40.016592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:22:40.020784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:22:40.031789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:22:40.037762 dracut-cmdline[269]: dracut-dracut-053
Apr 13 19:22:40.037762 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:22:40.070995 systemd-resolved[278]: Positive Trust Anchors:
Apr 13 19:22:40.071824 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:22:40.072823 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:22:40.082493 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 13 19:22:40.083616 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:22:40.084427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:22:40.117527 kernel: SCSI subsystem initialized
Apr 13 19:22:40.121498 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:22:40.129513 kernel: iscsi: registered transport (tcp)
Apr 13 19:22:40.144565 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:22:40.144667 kernel: QLogic iSCSI HBA Driver
Apr 13 19:22:40.196381 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:22:40.204801 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:22:40.225913 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:22:40.226033 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:22:40.226079 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:22:40.278534 kernel: raid6: neonx8 gen() 15673 MB/s
Apr 13 19:22:40.295492 kernel: raid6: neonx4 gen() 13942 MB/s
Apr 13 19:22:40.312508 kernel: raid6: neonx2 gen() 13015 MB/s
Apr 13 19:22:40.329524 kernel: raid6: neonx1 gen() 9865 MB/s
Apr 13 19:22:40.346510 kernel: raid6: int64x8 gen() 6849 MB/s
Apr 13 19:22:40.363521 kernel: raid6: int64x4 gen() 7234 MB/s
Apr 13 19:22:40.380555 kernel: raid6: int64x2 gen() 6038 MB/s
Apr 13 19:22:40.397522 kernel: raid6: int64x1 gen() 4965 MB/s
Apr 13 19:22:40.397604 kernel: raid6: using algorithm neonx8 gen() 15673 MB/s
Apr 13 19:22:40.414532 kernel: raid6: .... xor() 11759 MB/s, rmw enabled
Apr 13 19:22:40.414608 kernel: raid6: using neon recovery algorithm
Apr 13 19:22:40.419944 kernel: xor: measuring software checksum speed
Apr 13 19:22:40.419989 kernel: 8regs : 14460 MB/sec
Apr 13 19:22:40.420504 kernel: 32regs : 18070 MB/sec
Apr 13 19:22:40.420536 kernel: arm64_neon : 24074 MB/sec
Apr 13 19:22:40.421498 kernel: xor: using function: arm64_neon (24074 MB/sec)
Apr 13 19:22:40.477331 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:22:40.492555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:22:40.499833 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:22:40.515621 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Apr 13 19:22:40.519138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:22:40.528796 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:22:40.546382 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Apr 13 19:22:40.587577 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:22:40.593700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:22:40.646864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:22:40.654641 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:22:40.672787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:22:40.678078 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:22:40.682657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:22:40.683425 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:22:40.692754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:22:40.723245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:22:40.749896 kernel: scsi host0: Virtio SCSI HBA
Apr 13 19:22:40.763512 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 19:22:40.764123 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 19:22:40.777788 kernel: ACPI: bus type USB registered
Apr 13 19:22:40.777841 kernel: usbcore: registered new interface driver usbfs
Apr 13 19:22:40.778836 kernel: usbcore: registered new interface driver hub
Apr 13 19:22:40.779913 kernel: usbcore: registered new device driver usb
Apr 13 19:22:40.789900 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:22:40.790013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:22:40.791431 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:22:40.794184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:22:40.794357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:22:40.796266 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:22:40.808814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:22:40.817064 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 13 19:22:40.818017 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 13 19:22:40.818894 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:22:40.822241 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 13 19:22:40.824940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:22:40.833082 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:22:40.833334 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 13 19:22:40.838496 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 13 19:22:40.838621 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 13 19:22:40.835256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:22:40.840640 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 13 19:22:40.840866 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 13 19:22:40.842179 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 13 19:22:40.842347 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 19:22:40.845514 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:22:40.845775 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 13 19:22:40.845866 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 13 19:22:40.848526 kernel: hub 1-0:1.0: USB hub found
Apr 13 19:22:40.848740 kernel: hub 1-0:1.0: 4 ports detected
Apr 13 19:22:40.851014 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:22:40.851069 kernel: GPT:17805311 != 80003071
Apr 13 19:22:40.851079 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:22:40.854600 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 13 19:22:40.854869 kernel: hub 2-0:1.0: USB hub found
Apr 13 19:22:40.854969 kernel: hub 2-0:1.0: 4 ports detected
Apr 13 19:22:40.855049 kernel: GPT:17805311 != 80003071
Apr 13 19:22:40.855059 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:22:40.855076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:22:40.856481 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 13 19:22:40.867331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:22:40.906491 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (513)
Apr 13 19:22:40.907483 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (500)
Apr 13 19:22:40.915237 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 19:22:40.923004 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 19:22:40.933733 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 19:22:40.934548 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 19:22:40.943845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:22:40.959490 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:22:40.970530 disk-uuid[573]: Primary Header is updated.
Apr 13 19:22:40.970530 disk-uuid[573]: Secondary Entries is updated.
Apr 13 19:22:40.970530 disk-uuid[573]: Secondary Header is updated.
Apr 13 19:22:40.979500 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:22:40.984493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:22:40.990514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:22:41.092495 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 13 19:22:41.228609 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 13 19:22:41.229163 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 13 19:22:41.229567 kernel: usbcore: registered new interface driver usbhid
Apr 13 19:22:41.229598 kernel: usbhid: USB HID core driver
Apr 13 19:22:41.335534 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 13 19:22:41.465535 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 13 19:22:41.520524 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 13 19:22:41.997639 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:22:41.997714 disk-uuid[574]: The operation has completed successfully.
Apr 13 19:22:42.051194 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:22:42.051320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:22:42.062694 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:22:42.066916 sh[593]: Success
Apr 13 19:22:42.079484 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:22:42.129139 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:22:42.138203 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:22:42.143756 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:22:42.160905 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:22:42.160978 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:22:42.161001 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:22:42.161023 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:22:42.161889 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:22:42.168490 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:22:42.170255 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:22:42.171928 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:22:42.177736 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:22:42.182889 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:22:42.195033 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:22:42.195135 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:22:42.195161 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:22:42.202974 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:22:42.203061 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:22:42.215486 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:22:42.216409 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:22:42.223949 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:22:42.229754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:22:42.333440 ignition[675]: Ignition 2.19.0
Apr 13 19:22:42.333468 ignition[675]: Stage: fetch-offline
Apr 13 19:22:42.333529 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:42.333538 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:42.335735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:22:42.333717 ignition[675]: parsed url from cmdline: ""
Apr 13 19:22:42.338873 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:22:42.333720 ignition[675]: no config URL provided
Apr 13 19:22:42.333725 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:22:42.333733 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:22:42.333738 ignition[675]: failed to fetch config: resource requires networking
Apr 13 19:22:42.333941 ignition[675]: Ignition finished successfully
Apr 13 19:22:42.351654 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:22:42.373163 systemd-networkd[781]: lo: Link UP
Apr 13 19:22:42.373177 systemd-networkd[781]: lo: Gained carrier
Apr 13 19:22:42.375135 systemd-networkd[781]: Enumeration completed
Apr 13 19:22:42.375353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:22:42.376246 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:42.376249 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:22:42.377753 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:42.377756 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:22:42.378290 systemd-networkd[781]: eth0: Link UP
Apr 13 19:22:42.378294 systemd-networkd[781]: eth0: Gained carrier
Apr 13 19:22:42.378302 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:42.379333 systemd[1]: Reached target network.target - Network.
Apr 13 19:22:42.385946 systemd-networkd[781]: eth1: Link UP
Apr 13 19:22:42.385949 systemd-networkd[781]: eth1: Gained carrier
Apr 13 19:22:42.385959 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:42.387740 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:22:42.403067 ignition[783]: Ignition 2.19.0 Apr 13 19:22:42.403077 ignition[783]: Stage: fetch Apr 13 19:22:42.403258 ignition[783]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:42.403268 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:42.403356 ignition[783]: parsed url from cmdline: "" Apr 13 19:22:42.403359 ignition[783]: no config URL provided Apr 13 19:22:42.403364 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:22:42.403370 ignition[783]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:22:42.403388 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 13 19:22:42.404134 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 19:22:42.430575 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 19:22:42.438590 systemd-networkd[781]: eth0: DHCPv4 address 49.13.49.84/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 19:22:42.604709 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 13 19:22:42.618438 ignition[783]: GET result: OK Apr 13 19:22:42.618699 ignition[783]: parsing config with SHA512: ad2314dc7859c491f09f79e24692a77b456311b8a399c6595e4a1fe2e45e62aa5bf2115013fb437e8196b88c9daac1bbb20dec8510b60679f9d8db7eedf9346c Apr 13 19:22:42.626184 unknown[783]: fetched base config from "system" Apr 13 19:22:42.627013 unknown[783]: fetched base config from "system" Apr 13 19:22:42.627022 unknown[783]: fetched user config from "hetzner" Apr 13 19:22:42.629124 ignition[783]: fetch: fetch complete Apr 13 19:22:42.629134 ignition[783]: fetch: fetch passed Apr 13 19:22:42.630995 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:22:42.629218 ignition[783]: Ignition finished successfully Apr 13 19:22:42.638718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 13 19:22:42.653916 ignition[790]: Ignition 2.19.0 Apr 13 19:22:42.653933 ignition[790]: Stage: kargs Apr 13 19:22:42.654203 ignition[790]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:42.654213 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:42.655576 ignition[790]: kargs: kargs passed Apr 13 19:22:42.655642 ignition[790]: Ignition finished successfully Apr 13 19:22:42.659368 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 19:22:42.668826 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:22:42.684841 ignition[796]: Ignition 2.19.0 Apr 13 19:22:42.684852 ignition[796]: Stage: disks Apr 13 19:22:42.685049 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:42.688020 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:22:42.685058 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:42.689345 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:22:42.686028 ignition[796]: disks: disks passed Apr 13 19:22:42.690652 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:22:42.686083 ignition[796]: Ignition finished successfully Apr 13 19:22:42.692930 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:22:42.694708 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:22:42.696388 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:22:42.708805 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:22:42.728832 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 19:22:42.732568 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:22:42.741725 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 13 19:22:42.795514 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:22:42.796260 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 19:22:42.798220 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:22:42.810716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:22:42.813588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:22:42.822708 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 19:22:42.823946 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:22:42.823997 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:22:42.833085 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (812) Apr 13 19:22:42.833114 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:22:42.832336 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:22:42.835818 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:22:42.835846 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:22:42.842868 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 19:22:42.847301 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:22:42.847366 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:22:42.857848 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 19:22:42.891576 coreos-metadata[814]: Apr 13 19:22:42.891 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 13 19:22:42.895133 coreos-metadata[814]: Apr 13 19:22:42.895 INFO Fetch successful Apr 13 19:22:42.896259 coreos-metadata[814]: Apr 13 19:22:42.896 INFO wrote hostname ci-4081-3-7-e-ee64700b2a to /sysroot/etc/hostname Apr 13 19:22:42.900854 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:22:42.904175 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:22:42.909507 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:22:42.914916 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:22:42.919916 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:22:43.031519 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:22:43.037626 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:22:43.041758 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:22:43.050506 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:22:43.079570 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 19:22:43.083469 ignition[928]: INFO : Ignition 2.19.0 Apr 13 19:22:43.083469 ignition[928]: INFO : Stage: mount Apr 13 19:22:43.083469 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:43.083469 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:43.087633 ignition[928]: INFO : mount: mount passed Apr 13 19:22:43.087633 ignition[928]: INFO : Ignition finished successfully Apr 13 19:22:43.086359 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:22:43.093749 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Apr 13 19:22:43.160443 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 19:22:43.165735 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:22:43.178716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Apr 13 19:22:43.180669 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:22:43.180735 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:22:43.180754 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:22:43.185076 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:22:43.185184 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:22:43.188274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:22:43.214469 ignition[958]: INFO : Ignition 2.19.0 Apr 13 19:22:43.214469 ignition[958]: INFO : Stage: files Apr 13 19:22:43.217733 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:43.217733 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:43.217733 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:22:43.217733 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:22:43.217733 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:22:43.225897 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:22:43.225897 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:22:43.225897 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:22:43.223373 unknown[958]: wrote ssh authorized keys file for user: core Apr 13 19:22:43.229936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] 
writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:22:43.229936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:22:43.369059 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:22:43.536903 systemd-networkd[781]: eth1: Gained IPv6LL Apr 13 19:22:44.177234 systemd-networkd[781]: eth0: Gained IPv6LL Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:22:49.235482 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:22:49.257559 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Apr 13 19:22:49.625234 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:22:50.210401 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:22:50.210401 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 19:22:50.214599 ignition[958]: 
INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:22:50.214599 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:22:50.214599 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:22:50.214599 ignition[958]: INFO : files: files passed Apr 13 19:22:50.214599 ignition[958]: INFO : Ignition finished successfully Apr 13 19:22:50.215952 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:22:50.223025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:22:50.232326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 19:22:50.233637 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:22:50.233740 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 13 19:22:50.248329 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:22:50.248329 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:22:50.251173 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:22:50.253339 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:22:50.255828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:22:50.265834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:22:50.311558 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:22:50.313508 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:22:50.314770 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:22:50.316089 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:22:50.317576 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:22:50.321669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:22:50.339553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:22:50.348802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:22:50.359538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:22:50.361333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:22:50.363173 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:22:50.364550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Apr 13 19:22:50.365353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:22:50.368034 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:22:50.369115 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:22:50.371168 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:22:50.372437 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:22:50.374038 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:22:50.375322 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:22:50.376579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:22:50.378011 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 19:22:50.379490 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 19:22:50.380974 systemd[1]: Stopped target swap.target - Swaps. Apr 13 19:22:50.382352 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 19:22:50.382502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:22:50.384093 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:22:50.385444 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:22:50.387014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 19:22:50.390521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:22:50.391690 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 19:22:50.391850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 19:22:50.394919 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 13 19:22:50.395089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:22:50.397429 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 19:22:50.397566 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 19:22:50.398910 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 13 19:22:50.399020 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:22:50.406934 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 19:22:50.411876 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 19:22:50.413847 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 19:22:50.414054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:22:50.423587 ignition[1012]: INFO : Ignition 2.19.0 Apr 13 19:22:50.423587 ignition[1012]: INFO : Stage: umount Apr 13 19:22:50.423587 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:50.423587 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:50.423587 ignition[1012]: INFO : umount: umount passed Apr 13 19:22:50.423587 ignition[1012]: INFO : Ignition finished successfully Apr 13 19:22:50.419843 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 19:22:50.419966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:22:50.430820 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:22:50.431578 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:22:50.433299 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:22:50.436303 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:22:50.438233 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Apr 13 19:22:50.438941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:22:50.439682 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:22:50.439732 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:22:50.442726 systemd[1]: Stopped target network.target - Network. Apr 13 19:22:50.444064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:22:50.444154 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:22:50.446057 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:22:50.448057 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:22:50.451580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:22:50.452421 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:22:50.454324 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:22:50.455515 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:22:50.455600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:22:50.456612 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:22:50.456686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:22:50.457714 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:22:50.457773 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:22:50.458743 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:22:50.458789 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:22:50.459965 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:22:50.460926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:22:50.463196 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 13 19:22:50.463912 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:22:50.464014 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 19:22:50.465041 systemd-networkd[781]: eth1: DHCPv6 lease lost Apr 13 19:22:50.466030 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:22:50.466128 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:22:50.466816 systemd-networkd[781]: eth0: DHCPv6 lease lost Apr 13 19:22:50.470761 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:22:50.470919 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:22:50.474163 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:22:50.474322 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:22:50.478048 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:22:50.479983 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:22:50.482222 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:22:50.482565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:22:50.490775 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:22:50.491357 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:22:50.491433 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:22:50.494475 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:22:50.494545 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:22:50.495274 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:22:50.495328 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:22:50.496429 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 13 19:22:50.496513 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:22:50.501111 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:22:50.513375 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:22:50.514330 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:22:50.517112 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:22:50.517224 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:22:50.519174 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:22:50.519245 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:22:50.521562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:22:50.521605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:22:50.523343 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:22:50.523413 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:22:50.525222 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:22:50.525283 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:22:50.527037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:22:50.527165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:22:50.542943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:22:50.545202 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:22:50.545377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:22:50.547739 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 13 19:22:50.547847 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:22:50.553666 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:22:50.553740 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:22:50.554876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:22:50.554945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:50.558670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:22:50.558771 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:22:50.561932 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:22:50.570920 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:22:50.582008 systemd[1]: Switching root. Apr 13 19:22:50.615075 systemd-journald[238]: Journal stopped Apr 13 19:22:51.640162 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Apr 13 19:22:51.640252 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 19:22:51.640265 kernel: SELinux: policy capability open_perms=1 Apr 13 19:22:51.640279 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 19:22:51.640293 kernel: SELinux: policy capability always_check_network=0 Apr 13 19:22:51.640305 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 19:22:51.640315 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 19:22:51.640325 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 19:22:51.640335 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 19:22:51.640349 kernel: audit: type=1403 audit(1776108170.767:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 19:22:51.640361 systemd[1]: Successfully loaded SELinux policy in 38.141ms. 
Apr 13 19:22:51.640384 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.783ms. Apr 13 19:22:51.640397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:22:51.640409 systemd[1]: Detected virtualization kvm. Apr 13 19:22:51.640420 systemd[1]: Detected architecture arm64. Apr 13 19:22:51.640431 systemd[1]: Detected first boot. Apr 13 19:22:51.640442 systemd[1]: Hostname set to . Apr 13 19:22:51.641515 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:22:51.641572 zram_generator::config[1055]: No configuration found. Apr 13 19:22:51.641589 systemd[1]: Populated /etc with preset unit settings. Apr 13 19:22:51.641607 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 19:22:51.641618 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 13 19:22:51.641668 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 19:22:51.641684 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 19:22:51.641694 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 19:22:51.641706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 19:22:51.641716 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 19:22:51.641728 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 19:22:51.641739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 19:22:51.641754 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Apr 13 19:22:51.641765 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 19:22:51.641777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:22:51.641788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:22:51.641799 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 19:22:51.641810 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 19:22:51.641821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 19:22:51.641832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:22:51.641848 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 13 19:22:51.641859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:22:51.641869 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 13 19:22:51.641880 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 19:22:51.641891 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 19:22:51.641903 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 19:22:51.641915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:22:51.641928 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:22:51.641939 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:22:51.641949 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:22:51.641960 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:22:51.641971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Apr 13 19:22:51.641982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:22:51.641994 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:22:51.642006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:22:51.642016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:22:51.642029 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:22:51.642040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:22:51.642051 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:22:51.642064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:22:51.642077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:22:51.642087 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 19:22:51.642098 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:22:51.642109 systemd[1]: Reached target machines.target - Containers. Apr 13 19:22:51.642125 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:22:51.642140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:51.642156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:22:51.642169 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:22:51.642179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:51.642190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 13 19:22:51.642204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:22:51.642216 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:22:51.642227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:51.642239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:22:51.642250 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 19:22:51.642260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 19:22:51.642271 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 19:22:51.642282 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 19:22:51.642295 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:22:51.642308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:22:51.642320 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:22:51.642331 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 19:22:51.642342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:22:51.642353 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 19:22:51.642364 systemd[1]: Stopped verity-setup.service. Apr 13 19:22:51.642374 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 19:22:51.642385 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:22:51.642397 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:22:51.642407 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:22:51.642420 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 13 19:22:51.642430 kernel: ACPI: bus type drm_connector registered Apr 13 19:22:51.642442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:22:51.642468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:22:51.642482 kernel: fuse: init (API version 7.39) Apr 13 19:22:51.643566 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:22:51.643591 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:22:51.643610 kernel: loop: module loaded Apr 13 19:22:51.643636 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:51.643651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:51.643663 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:22:51.643675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:22:51.643691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:51.643702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:22:51.643713 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:22:51.643724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:22:51.643735 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:22:51.643780 systemd-journald[1129]: Collecting audit messages is disabled. Apr 13 19:22:51.643805 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:22:51.643816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:22:51.643827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:22:51.643839 systemd-journald[1129]: Journal started Apr 13 19:22:51.643861 systemd-journald[1129]: Runtime Journal (/run/log/journal/e1b2fa96966d4a67ab0d949540d9016f) is 8.0M, max 76.6M, 68.6M free. 
Apr 13 19:22:51.303037 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:22:51.324431 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 19:22:51.324864 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 19:22:51.648747 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:22:51.647886 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:22:51.651338 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:22:51.664676 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:22:51.669789 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:22:51.682408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:22:51.686742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:22:51.686819 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:22:51.690162 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:22:51.698779 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:22:51.701738 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:22:51.704923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:51.714663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:22:51.717646 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Apr 13 19:22:51.718805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:51.724728 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:22:51.726142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:22:51.736851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:22:51.741736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:22:51.747792 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:22:51.751417 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:22:51.753730 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:22:51.754919 systemd-journald[1129]: Time spent on flushing to /var/log/journal/e1b2fa96966d4a67ab0d949540d9016f is 47.147ms for 1125 entries. Apr 13 19:22:51.754919 systemd-journald[1129]: System Journal (/var/log/journal/e1b2fa96966d4a67ab0d949540d9016f) is 8.0M, max 584.8M, 576.8M free. Apr 13 19:22:51.828835 systemd-journald[1129]: Received client request to flush runtime journal. Apr 13 19:22:51.828899 kernel: loop0: detected capacity change from 0 to 114328 Apr 13 19:22:51.757651 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:22:51.779548 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:22:51.780528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:22:51.792720 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:22:51.793929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 13 19:22:51.799452 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:22:51.832057 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:22:51.838911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:22:51.846789 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 19:22:51.856512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:22:51.867079 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 13 19:22:51.873498 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:22:51.876352 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Apr 13 19:22:51.876369 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Apr 13 19:22:51.881313 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:22:51.893715 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:22:51.898906 kernel: loop1: detected capacity change from 0 to 114432 Apr 13 19:22:51.940498 kernel: loop2: detected capacity change from 0 to 8 Apr 13 19:22:51.941101 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:22:51.948813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:22:51.976559 kernel: loop3: detected capacity change from 0 to 197488 Apr 13 19:22:51.988024 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Apr 13 19:22:51.988047 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Apr 13 19:22:51.996154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 13 19:22:52.016487 kernel: loop4: detected capacity change from 0 to 114328 Apr 13 19:22:52.041526 kernel: loop5: detected capacity change from 0 to 114432 Apr 13 19:22:52.065508 kernel: loop6: detected capacity change from 0 to 8 Apr 13 19:22:52.070490 kernel: loop7: detected capacity change from 0 to 197488 Apr 13 19:22:52.089939 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 19:22:52.090991 (sd-merge)[1197]: Merged extensions into '/usr'. Apr 13 19:22:52.098759 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:22:52.098781 systemd[1]: Reloading... Apr 13 19:22:52.204502 zram_generator::config[1223]: No configuration found. Apr 13 19:22:52.390838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:22:52.393693 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:22:52.443580 systemd[1]: Reloading finished in 344 ms. Apr 13 19:22:52.466736 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:22:52.472369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:22:52.482875 systemd[1]: Starting ensure-sysext.service... Apr 13 19:22:52.486822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:22:52.491549 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:22:52.500304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:22:52.502892 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:22:52.502925 systemd[1]: Reloading... 
Apr 13 19:22:52.531958 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:22:52.533212 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:22:52.533939 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:22:52.534161 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Apr 13 19:22:52.534207 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Apr 13 19:22:52.542923 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:22:52.542935 systemd-tmpfiles[1262]: Skipping /boot Apr 13 19:22:52.552233 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:22:52.552394 systemd-tmpfiles[1262]: Skipping /boot Apr 13 19:22:52.586267 systemd-udevd[1264]: Using default interface naming scheme 'v255'. Apr 13 19:22:52.603526 zram_generator::config[1290]: No configuration found. Apr 13 19:22:52.756567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:22:52.809481 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 19:22:52.814870 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 13 19:22:52.815349 systemd[1]: Reloading finished in 311 ms. Apr 13 19:22:52.826333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:22:52.843357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:22:52.879914 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 13 19:22:52.886331 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:22:52.898553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1315) Apr 13 19:22:52.897830 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:22:52.904353 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:22:52.911676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:22:52.917816 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:22:52.921395 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 19:22:52.926918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:52.933847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:52.948003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:22:52.951370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:52.953753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:52.958533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:52.958793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:52.962282 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:22:52.970006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 13 19:22:52.972542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:52.976882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:52.980501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:52.981331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:52.990688 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:22:52.993185 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:22:52.996277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:52.999812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:22:53.001950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:53.004948 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:22:53.007476 systemd[1]: Finished ensure-sysext.service. Apr 13 19:22:53.033786 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 19:22:53.034852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:53.035096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:22:53.042814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:53.061060 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 13 19:22:53.063557 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 13 19:22:53.063679 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 19:22:53.063699 kernel: [drm] features: -context_init Apr 13 19:22:53.062594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:22:53.069829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:53.069996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:53.072139 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:22:53.073435 augenrules[1402]: No rules Apr 13 19:22:53.081638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:22:53.085417 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:22:53.088515 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:22:53.095054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:22:53.096478 kernel: [drm] number of scanouts: 1 Apr 13 19:22:53.096567 kernel: [drm] number of cap sets: 0 Apr 13 19:22:53.106553 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 19:22:53.108670 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:22:53.117569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 19:22:53.121280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 13 19:22:53.161605 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 19:22:53.179537 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 19:22:53.195833 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 19:22:53.196822 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:22:53.219885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:22:53.227037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:22:53.230054 systemd-networkd[1371]: lo: Link UP Apr 13 19:22:53.230352 systemd-networkd[1371]: lo: Gained carrier Apr 13 19:22:53.232845 systemd-networkd[1371]: Enumeration completed Apr 13 19:22:53.237708 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:22:53.238477 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:53.238573 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:22:53.239977 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:22:53.240514 systemd-networkd[1371]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:53.240600 systemd-networkd[1371]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:22:53.241870 systemd-networkd[1371]: eth0: Link UP Apr 13 19:22:53.242012 systemd-networkd[1371]: eth0: Gained carrier Apr 13 19:22:53.242073 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 19:22:53.243271 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:22:53.249961 systemd-networkd[1371]: eth1: Link UP Apr 13 19:22:53.249971 systemd-networkd[1371]: eth1: Gained carrier Apr 13 19:22:53.249993 systemd-networkd[1371]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:53.278776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:22:53.279591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:53.282183 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:22:53.295777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:22:53.296699 systemd-networkd[1371]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 19:22:53.299124 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Apr 13 19:22:53.301608 systemd-networkd[1371]: eth0: DHCPv4 address 49.13.49.84/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 19:22:53.302025 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Apr 13 19:22:53.303447 systemd-resolved[1372]: Positive Trust Anchors: Apr 13 19:22:53.303487 systemd-resolved[1372]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:22:53.303520 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:22:53.304156 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Apr 13 19:22:53.310029 systemd-resolved[1372]: Using system hostname 'ci-4081-3-7-e-ee64700b2a'. Apr 13 19:22:53.312128 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:22:53.313445 systemd[1]: Reached target network.target - Network. Apr 13 19:22:53.314387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:22:53.356984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:53.406063 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:22:53.419860 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:22:53.434570 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:22:53.463734 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:22:53.465388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:22:53.466696 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:22:53.467864 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 13 19:22:53.468866 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:22:53.469940 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:22:53.470828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:22:53.471734 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:22:53.472546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:22:53.472587 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:22:53.473191 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:22:53.475361 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:22:53.478126 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:22:53.484869 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:22:53.487661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:22:53.489303 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:22:53.490554 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:22:53.491406 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:22:53.492401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:22:53.492436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:22:53.497701 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:22:53.504806 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:22:53.505843 lvm[1443]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Apr 13 19:22:53.508977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:22:53.515671 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:22:53.527775 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:22:53.529143 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:22:53.536062 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:22:53.551916 jq[1447]: false Apr 13 19:22:53.551668 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:22:53.555770 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 13 19:22:53.567436 coreos-metadata[1445]: Apr 13 19:22:53.564 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 19:22:53.567436 coreos-metadata[1445]: Apr 13 19:22:53.564 INFO Fetch successful Apr 13 19:22:53.567436 coreos-metadata[1445]: Apr 13 19:22:53.564 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 19:22:53.568799 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:22:53.582572 coreos-metadata[1445]: Apr 13 19:22:53.577 INFO Fetch successful Apr 13 19:22:53.574736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:22:53.583127 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 13 19:22:53.585665 extend-filesystems[1448]: Found loop4 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found loop5 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found loop6 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found loop7 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda1 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda2 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda3 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found usr Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda4 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda6 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda7 Apr 13 19:22:53.585665 extend-filesystems[1448]: Found sda9 Apr 13 19:22:53.585413 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:22:53.619209 dbus-daemon[1446]: [system] SELinux support is enabled Apr 13 19:22:53.680724 extend-filesystems[1448]: Checking size of /dev/sda9 Apr 13 19:22:53.680724 extend-filesystems[1448]: Resized partition /dev/sda9 Apr 13 19:22:53.587130 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:22:53.694322 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:22:53.699595 update_engine[1461]: I20260413 19:22:53.643709 1461 main.cc:92] Flatcar Update Engine starting Apr 13 19:22:53.699595 update_engine[1461]: I20260413 19:22:53.648731 1461 update_check_scheduler.cc:74] Next update check in 10m24s Apr 13 19:22:53.599865 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:22:53.611343 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:22:53.613876 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Apr 13 19:22:53.711170 tar[1472]: linux-arm64/LICENSE Apr 13 19:22:53.711170 tar[1472]: linux-arm64/helm Apr 13 19:22:53.722775 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 13 19:22:53.622780 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:22:53.632942 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:22:53.724564 jq[1467]: true Apr 13 19:22:53.633150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:22:53.635011 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:22:53.635185 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:22:53.648504 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:22:53.648595 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:22:53.650535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:22:53.650573 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:22:53.656126 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:22:53.670710 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:22:53.686842 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:22:53.686903 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:22:53.688556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 13 19:22:53.749538 jq[1489]: true
Apr 13 19:22:53.781959 systemd-logind[1458]: New seat seat0.
Apr 13 19:22:53.786211 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 13 19:22:53.786235 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 13 19:22:53.789980 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:22:53.812495 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1321)
Apr 13 19:22:53.839569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:22:53.848358 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:22:53.928486 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 13 19:22:53.934902 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:22:53.943544 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 19:22:53.943544 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 13 19:22:53.943544 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 13 19:22:53.951701 extend-filesystems[1448]: Resized filesystem in /dev/sda9
Apr 13 19:22:53.951701 extend-filesystems[1448]: Found sr0
Apr 13 19:22:53.948512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:22:53.956881 bash[1522]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:22:53.950434 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:22:53.951789 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:22:53.972681 systemd[1]: Starting sshkeys.service...
Apr 13 19:22:54.006147 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 19:22:54.016832 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 19:22:54.064997 coreos-metadata[1530]: Apr 13 19:22:54.064 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 13 19:22:54.067841 coreos-metadata[1530]: Apr 13 19:22:54.067 INFO Fetch successful
Apr 13 19:22:54.069690 unknown[1530]: wrote ssh authorized keys file for user: core
Apr 13 19:22:54.111975 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:22:54.115915 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 19:22:54.121644 containerd[1480]: time="2026-04-13T19:22:54.120738840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:22:54.123055 systemd[1]: Finished sshkeys.service.
Apr 13 19:22:54.197565 containerd[1480]: time="2026-04-13T19:22:54.197484560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.200621 containerd[1480]: time="2026-04-13T19:22:54.200558000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.200762240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.200790000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.200968080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.200987320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.201062120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201150 containerd[1480]: time="2026-04-13T19:22:54.201075600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201592 containerd[1480]: time="2026-04-13T19:22:54.201435880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201732 containerd[1480]: time="2026-04-13T19:22:54.201703360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201826 containerd[1480]: time="2026-04-13T19:22:54.201810800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:22:54.201879 containerd[1480]: time="2026-04-13T19:22:54.201867520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.202042 containerd[1480]: time="2026-04-13T19:22:54.202020000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.202794 containerd[1480]: time="2026-04-13T19:22:54.202759640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:22:54.205186 containerd[1480]: time="2026-04-13T19:22:54.204718240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:22:54.205186 containerd[1480]: time="2026-04-13T19:22:54.204747800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:22:54.205186 containerd[1480]: time="2026-04-13T19:22:54.204862120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:22:54.205186 containerd[1480]: time="2026-04-13T19:22:54.204908640Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:22:54.212219 containerd[1480]: time="2026-04-13T19:22:54.211970840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:22:54.212219 containerd[1480]: time="2026-04-13T19:22:54.212048760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:22:54.212219 containerd[1480]: time="2026-04-13T19:22:54.212068800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:22:54.212219 containerd[1480]: time="2026-04-13T19:22:54.212087440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:22:54.212219 containerd[1480]: time="2026-04-13T19:22:54.212102280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212531760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212791360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212916720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212934560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212950280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212964920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212978840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.212992920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213006840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213021520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213036400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213049080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213061480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:22:54.214499 containerd[1480]: time="2026-04-13T19:22:54.213082760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213103880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213116320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213129680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213143440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213157000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213169040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213182600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213195840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213217960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213230120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213241520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213257240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213277360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213297920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.214829 containerd[1480]: time="2026-04-13T19:22:54.213310480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213321840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213450080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213486640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213498120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213510840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213520360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213533000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213544920Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:22:54.215077 containerd[1480]: time="2026-04-13T19:22:54.213555680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:22:54.215222 containerd[1480]: time="2026-04-13T19:22:54.213934960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:22:54.215222 containerd[1480]: time="2026-04-13T19:22:54.213997240Z" level=info msg="Connect containerd service"
Apr 13 19:22:54.215222 containerd[1480]: time="2026-04-13T19:22:54.214025920Z" level=info msg="using legacy CRI server"
Apr 13 19:22:54.215222 containerd[1480]: time="2026-04-13T19:22:54.214032480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:22:54.215222 containerd[1480]: time="2026-04-13T19:22:54.214123080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:22:54.218994 containerd[1480]: time="2026-04-13T19:22:54.218906640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:22:54.219416 containerd[1480]: time="2026-04-13T19:22:54.219309440Z" level=info msg="Start subscribing containerd event"
Apr 13 19:22:54.219416 containerd[1480]: time="2026-04-13T19:22:54.219386040Z" level=info msg="Start recovering state"
Apr 13 19:22:54.219501 containerd[1480]: time="2026-04-13T19:22:54.219484720Z" level=info msg="Start event monitor"
Apr 13 19:22:54.219530 containerd[1480]: time="2026-04-13T19:22:54.219500360Z" level=info msg="Start snapshots syncer"
Apr 13 19:22:54.219530 containerd[1480]: time="2026-04-13T19:22:54.219511720Z" level=info msg="Start cni network conf syncer for default"
Apr 13 19:22:54.219530 containerd[1480]: time="2026-04-13T19:22:54.219520800Z" level=info msg="Start streaming server"
Apr 13 19:22:54.219881 containerd[1480]: time="2026-04-13T19:22:54.219861600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 19:22:54.223825 containerd[1480]: time="2026-04-13T19:22:54.221607800Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 19:22:54.221816 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 19:22:54.226204 containerd[1480]: time="2026-04-13T19:22:54.226134000Z" level=info msg="containerd successfully booted in 0.109745s"
Apr 13 19:22:54.288650 systemd-networkd[1371]: eth1: Gained IPv6LL
Apr 13 19:22:54.290176 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Apr 13 19:22:54.293535 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:22:54.297101 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:22:54.307718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:22:54.311247 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:22:54.362524 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:22:54.501683 tar[1472]: linux-arm64/README.md
Apr 13 19:22:54.526508 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:22:54.928595 systemd-networkd[1371]: eth0: Gained IPv6LL
Apr 13 19:22:54.929127 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Apr 13 19:22:55.127811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:22:55.131992 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:22:55.323219 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:22:55.351711 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:22:55.360873 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:22:55.371328 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:22:55.372181 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:22:55.381199 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 19:22:55.392152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 19:22:55.402228 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 19:22:55.414648 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 13 19:22:55.416542 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 19:22:55.417740 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 19:22:55.422572 systemd[1]: Startup finished in 825ms (kernel) + 11.070s (initrd) + 4.693s (userspace) = 16.590s.
Apr 13 19:22:55.620024 kubelet[1559]: E0413 19:22:55.619861 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:22:55.623820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:22:55.623996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:05.874610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:23:05.880815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:06.019598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:06.038243 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:06.091791 kubelet[1595]: E0413 19:23:06.091713 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:06.096769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:06.097115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:16.347869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 19:23:16.356808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:16.480656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:16.494397 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:16.545918 kubelet[1611]: E0413 19:23:16.545788 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:16.549279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:16.549567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:22.067088 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 19:23:22.073888 systemd[1]: Started sshd@0-49.13.49.84:22-50.85.169.122:41856.service - OpenSSH per-connection server daemon (50.85.169.122:41856).
Apr 13 19:23:22.202996 sshd[1620]: Accepted publickey for core from 50.85.169.122 port 41856 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:22.204972 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:22.216560 systemd-logind[1458]: New session 1 of user core.
Apr 13 19:23:22.219073 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 19:23:22.231634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 19:23:22.247928 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 19:23:22.258938 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 19:23:22.263258 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 19:23:22.376858 systemd[1624]: Queued start job for default target default.target.
Apr 13 19:23:22.385743 systemd[1624]: Created slice app.slice - User Application Slice.
Apr 13 19:23:22.385946 systemd[1624]: Reached target paths.target - Paths.
Apr 13 19:23:22.386049 systemd[1624]: Reached target timers.target - Timers.
Apr 13 19:23:22.387833 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 19:23:22.404175 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 19:23:22.404314 systemd[1624]: Reached target sockets.target - Sockets.
Apr 13 19:23:22.404330 systemd[1624]: Reached target basic.target - Basic System.
Apr 13 19:23:22.404391 systemd[1624]: Reached target default.target - Main User Target.
Apr 13 19:23:22.404426 systemd[1624]: Startup finished in 134ms.
Apr 13 19:23:22.404549 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 19:23:22.410849 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 19:23:22.530921 systemd[1]: Started sshd@1-49.13.49.84:22-50.85.169.122:41868.service - OpenSSH per-connection server daemon (50.85.169.122:41868).
Apr 13 19:23:22.648919 sshd[1635]: Accepted publickey for core from 50.85.169.122 port 41868 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:22.651424 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:22.657333 systemd-logind[1458]: New session 2 of user core.
Apr 13 19:23:22.668925 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 19:23:22.769628 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:22.775628 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
Apr 13 19:23:22.776661 systemd[1]: sshd@1-49.13.49.84:22-50.85.169.122:41868.service: Deactivated successfully.
Apr 13 19:23:22.779229 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 19:23:22.780785 systemd-logind[1458]: Removed session 2.
Apr 13 19:23:22.798927 systemd[1]: Started sshd@2-49.13.49.84:22-50.85.169.122:41872.service - OpenSSH per-connection server daemon (50.85.169.122:41872).
Apr 13 19:23:22.923342 sshd[1642]: Accepted publickey for core from 50.85.169.122 port 41872 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:22.925812 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:22.931624 systemd-logind[1458]: New session 3 of user core.
Apr 13 19:23:22.938836 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 19:23:23.039808 sshd[1642]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:23.046614 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit.
Apr 13 19:23:23.047284 systemd[1]: sshd@2-49.13.49.84:22-50.85.169.122:41872.service: Deactivated successfully.
Apr 13 19:23:23.049648 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 19:23:23.052396 systemd-logind[1458]: Removed session 3.
Apr 13 19:23:23.067926 systemd[1]: Started sshd@3-49.13.49.84:22-50.85.169.122:41876.service - OpenSSH per-connection server daemon (50.85.169.122:41876).
Apr 13 19:23:23.206342 sshd[1649]: Accepted publickey for core from 50.85.169.122 port 41876 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:23.209032 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:23.214785 systemd-logind[1458]: New session 4 of user core.
Apr 13 19:23:23.221697 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:23:23.324322 sshd[1649]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:23.330727 systemd[1]: sshd@3-49.13.49.84:22-50.85.169.122:41876.service: Deactivated successfully.
Apr 13 19:23:23.333709 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:23:23.334936 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:23:23.348359 systemd-logind[1458]: Removed session 4.
Apr 13 19:23:23.354357 systemd[1]: Started sshd@4-49.13.49.84:22-50.85.169.122:41878.service - OpenSSH per-connection server daemon (50.85.169.122:41878).
Apr 13 19:23:23.475103 sshd[1656]: Accepted publickey for core from 50.85.169.122 port 41878 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:23.477353 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:23.484525 systemd-logind[1458]: New session 5 of user core.
Apr 13 19:23:23.490837 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:23:23.584597 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:23:23.584903 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:23.609453 sudo[1659]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:23.625742 sshd[1656]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:23.633681 systemd[1]: sshd@4-49.13.49.84:22-50.85.169.122:41878.service: Deactivated successfully.
Apr 13 19:23:23.636615 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:23:23.639913 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:23:23.641222 systemd-logind[1458]: Removed session 5.
Apr 13 19:23:23.658928 systemd[1]: Started sshd@5-49.13.49.84:22-50.85.169.122:41888.service - OpenSSH per-connection server daemon (50.85.169.122:41888).
Apr 13 19:23:23.783921 sshd[1664]: Accepted publickey for core from 50.85.169.122 port 41888 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:23.787024 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:23.793592 systemd-logind[1458]: New session 6 of user core.
Apr 13 19:23:23.802809 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:23:23.890574 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:23:23.890889 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:23.895049 sudo[1668]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:23.901883 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:23:23.902179 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:23.923009 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:23.925237 auditctl[1671]: No rules
Apr 13 19:23:23.925870 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:23:23.926067 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:23.933431 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:23.961095 augenrules[1689]: No rules
Apr 13 19:23:23.962899 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:23.964158 sudo[1667]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:23.981599 sshd[1664]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:23.987008 systemd[1]: sshd@5-49.13.49.84:22-50.85.169.122:41888.service: Deactivated successfully.
Apr 13 19:23:23.990449 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:23:23.991969 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:23:23.993423 systemd-logind[1458]: Removed session 6.
Apr 13 19:23:24.004871 systemd[1]: Started sshd@6-49.13.49.84:22-50.85.169.122:41898.service - OpenSSH per-connection server daemon (50.85.169.122:41898).
Apr 13 19:23:24.140934 sshd[1697]: Accepted publickey for core from 50.85.169.122 port 41898 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:24.142334 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:24.150028 systemd-logind[1458]: New session 7 of user core. Apr 13 19:23:24.155844 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:23:24.244211 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:23:24.244568 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:24.565131 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 19:23:24.567583 (dockerd)[1715]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:23:24.830756 dockerd[1715]: time="2026-04-13T19:23:24.830040680Z" level=info msg="Starting up" Apr 13 19:23:24.923783 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2700945233-merged.mount: Deactivated successfully. Apr 13 19:23:24.945998 dockerd[1715]: time="2026-04-13T19:23:24.945919640Z" level=info msg="Loading containers: start." Apr 13 19:23:25.052584 kernel: Initializing XFRM netlink socket Apr 13 19:23:25.141414 systemd-networkd[1371]: docker0: Link UP Apr 13 19:23:25.164520 dockerd[1715]: time="2026-04-13T19:23:25.164119840Z" level=info msg="Loading containers: done." Apr 13 19:23:25.181222 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3577162741-merged.mount: Deactivated successfully. 
Apr 13 19:23:25.184278 dockerd[1715]: time="2026-04-13T19:23:25.183794960Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:23:25.184278 dockerd[1715]: time="2026-04-13T19:23:25.183931880Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:23:25.184278 dockerd[1715]: time="2026-04-13T19:23:25.184057680Z" level=info msg="Daemon has completed initialization" Apr 13 19:23:25.227821 dockerd[1715]: time="2026-04-13T19:23:25.226733320Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:23:25.227423 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:23:24.758479 systemd-resolved[1372]: Clock change detected. Flushing caches. Apr 13 19:23:24.767977 systemd-journald[1129]: Time jumped backwards, rotating. Apr 13 19:23:24.758745 systemd-timesyncd[1397]: Contacted time server 131.234.220.231:123 (2.flatcar.pool.ntp.org). Apr 13 19:23:24.758809 systemd-timesyncd[1397]: Initial clock synchronization to Mon 2026-04-13 19:23:24.758340 UTC. Apr 13 19:23:25.234613 containerd[1480]: time="2026-04-13T19:23:25.234505205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\"" Apr 13 19:23:25.774225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805325222.mount: Deactivated successfully. Apr 13 19:23:26.079762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:23:26.086776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:26.263693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:23:26.266148 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:26.304929 kubelet[1919]: E0413 19:23:26.304865 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:26.308070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:26.308324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:26.980157 containerd[1480]: time="2026-04-13T19:23:26.980068405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:26.981884 containerd[1480]: time="2026-04-13T19:23:26.981834365Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=24595509" Apr 13 19:23:26.983118 containerd[1480]: time="2026-04-13T19:23:26.983069365Z" level=info msg="ImageCreate event name:\"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:26.987476 containerd[1480]: time="2026-04-13T19:23:26.986815285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:26.990857 containerd[1480]: time="2026-04-13T19:23:26.990054805Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"24592010\" in 1.75547756s" Apr 13 19:23:26.990857 containerd[1480]: time="2026-04-13T19:23:26.990143725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\"" Apr 13 19:23:26.991658 containerd[1480]: time="2026-04-13T19:23:26.991619965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\"" Apr 13 19:23:28.129222 containerd[1480]: time="2026-04-13T19:23:28.129148285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:28.130940 containerd[1480]: time="2026-04-13T19:23:28.130891285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=19064115" Apr 13 19:23:28.134352 containerd[1480]: time="2026-04-13T19:23:28.134294485Z" level=info msg="ImageCreate event name:\"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:28.139824 containerd[1480]: time="2026-04-13T19:23:28.139755685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:28.141741 containerd[1480]: time="2026-04-13T19:23:28.141513285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"20569814\" in 
1.14975804s" Apr 13 19:23:28.141741 containerd[1480]: time="2026-04-13T19:23:28.141563685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\"" Apr 13 19:23:28.142095 containerd[1480]: time="2026-04-13T19:23:28.142060885Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\"" Apr 13 19:23:29.150851 containerd[1480]: time="2026-04-13T19:23:29.150772685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:29.152522 containerd[1480]: time="2026-04-13T19:23:29.152472325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=13797917" Apr 13 19:23:29.153980 containerd[1480]: time="2026-04-13T19:23:29.153624325Z" level=info msg="ImageCreate event name:\"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:29.158245 containerd[1480]: time="2026-04-13T19:23:29.158202045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:29.159712 containerd[1480]: time="2026-04-13T19:23:29.159666045Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"15303634\" in 1.01756152s" Apr 13 19:23:29.159851 containerd[1480]: time="2026-04-13T19:23:29.159832965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference 
\"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\"" Apr 13 19:23:29.160898 containerd[1480]: time="2026-04-13T19:23:29.160865205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\"" Apr 13 19:23:30.053987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826412676.mount: Deactivated successfully. Apr 13 19:23:30.297225 containerd[1480]: time="2026-04-13T19:23:30.297173725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:30.298389 containerd[1480]: time="2026-04-13T19:23:30.298348885Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=22329611" Apr 13 19:23:30.299168 containerd[1480]: time="2026-04-13T19:23:30.299112765Z" level=info msg="ImageCreate event name:\"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:30.301845 containerd[1480]: time="2026-04-13T19:23:30.301733565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:30.302876 containerd[1480]: time="2026-04-13T19:23:30.302721085Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"22328604\" in 1.14181224s" Apr 13 19:23:30.302876 containerd[1480]: time="2026-04-13T19:23:30.302760205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\"" Apr 13 19:23:30.303730 
containerd[1480]: time="2026-04-13T19:23:30.303687205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 13 19:23:30.847540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708342367.mount: Deactivated successfully. Apr 13 19:23:31.910044 containerd[1480]: time="2026-04-13T19:23:31.909700885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:31.912624 containerd[1480]: time="2026-04-13T19:23:31.911952965Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309" Apr 13 19:23:31.914507 containerd[1480]: time="2026-04-13T19:23:31.914459925Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:31.918737 containerd[1480]: time="2026-04-13T19:23:31.918663925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:31.922105 containerd[1480]: time="2026-04-13T19:23:31.921236285Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.61738084s" Apr 13 19:23:31.922105 containerd[1480]: time="2026-04-13T19:23:31.921312645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Apr 13 19:23:31.923088 containerd[1480]: time="2026-04-13T19:23:31.922757765Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 19:23:32.363661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894469421.mount: Deactivated successfully. Apr 13 19:23:32.370444 containerd[1480]: time="2026-04-13T19:23:32.369894325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:32.370980 containerd[1480]: time="2026-04-13T19:23:32.370938285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 13 19:23:32.373210 containerd[1480]: time="2026-04-13T19:23:32.371722005Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:32.374657 containerd[1480]: time="2026-04-13T19:23:32.374613925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:32.375837 containerd[1480]: time="2026-04-13T19:23:32.375791005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 452.9764ms" Apr 13 19:23:32.375837 containerd[1480]: time="2026-04-13T19:23:32.375837405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 13 19:23:32.376653 containerd[1480]: time="2026-04-13T19:23:32.376618565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 13 19:23:32.896652 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4241875223.mount: Deactivated successfully. Apr 13 19:23:33.584718 containerd[1480]: time="2026-04-13T19:23:33.584647245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:33.586748 containerd[1480]: time="2026-04-13T19:23:33.586688965Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21751802" Apr 13 19:23:33.587356 containerd[1480]: time="2026-04-13T19:23:33.587325765Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:33.591699 containerd[1480]: time="2026-04-13T19:23:33.591638485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:33.593939 containerd[1480]: time="2026-04-13T19:23:33.593880085Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.21722328s" Apr 13 19:23:33.593939 containerd[1480]: time="2026-04-13T19:23:33.593927925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Apr 13 19:23:36.329934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 13 19:23:36.340979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:36.459648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:23:36.464763 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:36.504153 kubelet[2095]: E0413 19:23:36.504094 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:36.508217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:36.508578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:36.727452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:36.739849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:36.784973 systemd[1]: Reloading requested from client PID 2109 ('systemctl') (unit session-7.scope)... Apr 13 19:23:36.784996 systemd[1]: Reloading... Apr 13 19:23:36.932604 zram_generator::config[2152]: No configuration found. Apr 13 19:23:37.035369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:37.108255 systemd[1]: Reloading finished in 322 ms. Apr 13 19:23:37.158051 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:23:37.158151 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:23:37.158610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:37.165062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:37.312927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:23:37.323011 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:23:37.382060 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:23:37.909810 kubelet[2197]: I0413 19:23:37.909678 2197 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 19:23:37.909810 kubelet[2197]: I0413 19:23:37.909774 2197 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:23:37.909994 kubelet[2197]: I0413 19:23:37.909831 2197 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:23:37.909994 kubelet[2197]: I0413 19:23:37.909842 2197 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:23:37.910770 kubelet[2197]: I0413 19:23:37.910721 2197 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 19:23:37.922555 kubelet[2197]: E0413 19:23:37.922459 2197 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.49.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.49.84:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:23:37.922794 kubelet[2197]: I0413 19:23:37.922771 2197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:23:37.928448 kubelet[2197]: E0413 19:23:37.926328 2197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:23:37.928448 kubelet[2197]: I0413 
19:23:37.926437 2197 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:23:37.929325 kubelet[2197]: I0413 19:23:37.929293 2197 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 13 19:23:37.930850 kubelet[2197]: I0413 19:23:37.930769 2197 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:23:37.931089 kubelet[2197]: I0413 19:23:37.930843 2197 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-e-ee64700b2a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,
"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:23:37.931089 kubelet[2197]: I0413 19:23:37.931081 2197 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 19:23:37.931208 kubelet[2197]: I0413 19:23:37.931092 2197 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 19:23:37.931244 kubelet[2197]: I0413 19:23:37.931228 2197 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:23:37.938310 kubelet[2197]: I0413 19:23:37.938233 2197 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 19:23:37.940456 kubelet[2197]: I0413 19:23:37.938700 2197 kubelet.go:482] "Attempting to sync node with API server" Apr 13 19:23:37.940456 kubelet[2197]: I0413 19:23:37.938741 2197 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:23:37.940456 kubelet[2197]: I0413 19:23:37.938775 2197 kubelet.go:394] "Adding apiserver pod source" Apr 13 19:23:37.940456 kubelet[2197]: I0413 19:23:37.938795 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:23:37.945652 kubelet[2197]: I0413 19:23:37.945596 2197 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:23:37.947082 kubelet[2197]: I0413 19:23:37.947038 2197 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:23:37.947082 kubelet[2197]: I0413 19:23:37.947086 2197 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:23:37.947224 kubelet[2197]: W0413 19:23:37.947144 2197 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 19:23:37.950501 kubelet[2197]: I0413 19:23:37.950460 2197 server.go:1257] "Started kubelet" Apr 13 19:23:37.956788 kubelet[2197]: I0413 19:23:37.956726 2197 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 19:23:37.966226 kubelet[2197]: I0413 19:23:37.966192 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:23:37.978076 kubelet[2197]: I0413 19:23:37.968825 2197 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:23:37.978247 kubelet[2197]: I0413 19:23:37.968872 2197 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:23:37.979991 kubelet[2197]: I0413 19:23:37.979960 2197 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:23:37.981254 kubelet[2197]: E0413 19:23:37.969166 2197 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" Apr 13 19:23:37.981357 kubelet[2197]: I0413 19:23:37.973810 2197 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 19:23:37.982239 kubelet[2197]: I0413 19:23:37.969052 2197 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:23:37.982433 kubelet[2197]: I0413 19:23:37.982390 2197 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:23:37.984078 kubelet[2197]: I0413 19:23:37.984035 2197 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:23:37.984312 kubelet[2197]: I0413 19:23:37.976969 2197 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:23:37.984609 kubelet[2197]: I0413 19:23:37.984578 2197 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:23:37.985124 kubelet[2197]: E0413 19:23:37.973929 2197 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.49.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-ee64700b2a?timeout=10s\": dial tcp 49.13.49.84:6443: connect: connection refused" interval="200ms" Apr 13 19:23:37.985535 kubelet[2197]: I0413 19:23:37.985513 2197 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:23:37.985661 kubelet[2197]: E0413 19:23:37.974639 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.49.84:6443/api/v1/namespaces/default/events\": dial tcp 49.13.49.84:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-e-ee64700b2a.18a601039594f7c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-e-ee64700b2a,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-e-ee64700b2a,},FirstTimestamp:2026-04-13 19:23:37.950336965 +0000 UTC m=+0.623062561,LastTimestamp:2026-04-13 19:23:37.950336965 +0000 UTC m=+0.623062561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-e-ee64700b2a,}" Apr 13 19:23:37.988278 kubelet[2197]: E0413 19:23:37.988242 2197 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:23:37.988658 kubelet[2197]: I0413 19:23:37.988640 2197 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:23:37.993038 kubelet[2197]: I0413 19:23:37.992997 2197 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 19:23:37.994534 kubelet[2197]: I0413 19:23:37.994494 2197 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 19:23:37.994534 kubelet[2197]: I0413 19:23:37.994526 2197 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 19:23:37.994667 kubelet[2197]: I0413 19:23:37.994549 2197 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 19:23:37.994667 kubelet[2197]: E0413 19:23:37.994606 2197 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:23:38.024270 kubelet[2197]: I0413 19:23:38.024232 2197 cpu_manager.go:225] "Starting" policy="none" Apr 13 19:23:38.024270 kubelet[2197]: I0413 19:23:38.024256 2197 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 19:23:38.024270 kubelet[2197]: I0413 19:23:38.024288 2197 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 19:23:38.027868 kubelet[2197]: I0413 19:23:38.027830 2197 policy_none.go:50] "Start" Apr 13 19:23:38.027868 kubelet[2197]: I0413 19:23:38.027861 2197 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:23:38.027868 kubelet[2197]: I0413 19:23:38.027876 2197 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:23:38.030076 kubelet[2197]: I0413 19:23:38.030048 2197 policy_none.go:44] "Start" Apr 13 19:23:38.034917 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:23:38.053230 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 19:23:38.057147 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 19:23:38.066721 kubelet[2197]: E0413 19:23:38.066675 2197 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:23:38.067191 kubelet[2197]: I0413 19:23:38.067166 2197 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:23:38.067339 kubelet[2197]: I0413 19:23:38.067288 2197 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:23:38.068939 kubelet[2197]: I0413 19:23:38.068543 2197 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:23:38.070629 kubelet[2197]: E0413 19:23:38.070598 2197 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:23:38.070785 kubelet[2197]: E0413 19:23:38.070653 2197 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-e-ee64700b2a\" not found" Apr 13 19:23:38.113038 systemd[1]: Created slice kubepods-burstable-pod1e8e46e5c2fd04709c94e35be0a3c28f.slice - libcontainer container kubepods-burstable-pod1e8e46e5c2fd04709c94e35be0a3c28f.slice. Apr 13 19:23:38.125042 kubelet[2197]: E0413 19:23:38.124978 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.130366 systemd[1]: Created slice kubepods-burstable-pod1c9ce81089f870804dbc347e665d74e3.slice - libcontainer container kubepods-burstable-pod1c9ce81089f870804dbc347e665d74e3.slice. 
Apr 13 19:23:38.150577 kubelet[2197]: E0413 19:23:38.150477 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.155075 systemd[1]: Created slice kubepods-burstable-pod752bf0cb5685baafe8db7935b18bfb91.slice - libcontainer container kubepods-burstable-pod752bf0cb5685baafe8db7935b18bfb91.slice. Apr 13 19:23:38.158084 kubelet[2197]: E0413 19:23:38.157826 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.170262 kubelet[2197]: I0413 19:23:38.170093 2197 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.171672 kubelet[2197]: E0413 19:23:38.171588 2197 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://49.13.49.84:6443/api/v1/nodes\": dial tcp 49.13.49.84:6443: connect: connection refused" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186434 kubelet[2197]: I0413 19:23:38.186279 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186434 kubelet[2197]: I0413 19:23:38.186346 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186860 kubelet[2197]: I0413 19:23:38.186548 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186860 kubelet[2197]: I0413 19:23:38.186582 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186860 kubelet[2197]: I0413 19:23:38.186612 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186860 kubelet[2197]: I0413 19:23:38.186635 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/752bf0cb5685baafe8db7935b18bfb91-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-e-ee64700b2a\" (UID: \"752bf0cb5685baafe8db7935b18bfb91\") " pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.186860 kubelet[2197]: I0413 19:23:38.186657 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.187094 kubelet[2197]: I0413 19:23:38.186679 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.187094 kubelet[2197]: I0413 19:23:38.186707 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.187826 kubelet[2197]: E0413 19:23:38.187732 2197 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.49.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-ee64700b2a?timeout=10s\": dial tcp 49.13.49.84:6443: connect: connection refused" interval="400ms" Apr 13 19:23:38.359263 update_engine[1461]: I20260413 19:23:38.358464 1461 update_attempter.cc:509] Updating boot flags... 
Apr 13 19:23:38.375066 kubelet[2197]: I0413 19:23:38.374654 2197 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.375066 kubelet[2197]: E0413 19:23:38.374977 2197 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://49.13.49.84:6443/api/v1/nodes\": dial tcp 49.13.49.84:6443: connect: connection refused" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.407740 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2240) Apr 13 19:23:38.447788 containerd[1480]: time="2026-04-13T19:23:38.445750245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-e-ee64700b2a,Uid:1e8e46e5c2fd04709c94e35be0a3c28f,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:38.454939 containerd[1480]: time="2026-04-13T19:23:38.454442485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-e-ee64700b2a,Uid:1c9ce81089f870804dbc347e665d74e3,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:38.460637 containerd[1480]: time="2026-04-13T19:23:38.460594245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-e-ee64700b2a,Uid:752bf0cb5685baafe8db7935b18bfb91,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:38.589319 kubelet[2197]: E0413 19:23:38.589232 2197 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.49.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-ee64700b2a?timeout=10s\": dial tcp 49.13.49.84:6443: connect: connection refused" interval="800ms" Apr 13 19:23:38.778057 kubelet[2197]: I0413 19:23:38.777335 2197 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.778827 kubelet[2197]: E0413 19:23:38.778349 2197 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://49.13.49.84:6443/api/v1/nodes\": 
dial tcp 49.13.49.84:6443: connect: connection refused" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:38.919841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786833356.mount: Deactivated successfully. Apr 13 19:23:38.931328 containerd[1480]: time="2026-04-13T19:23:38.931248205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:38.933067 containerd[1480]: time="2026-04-13T19:23:38.932717085Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:38.934270 containerd[1480]: time="2026-04-13T19:23:38.934220845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:23:38.935119 containerd[1480]: time="2026-04-13T19:23:38.935016125Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:23:38.937691 containerd[1480]: time="2026-04-13T19:23:38.936836045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:38.938796 containerd[1480]: time="2026-04-13T19:23:38.938708205Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:38.939832 containerd[1480]: time="2026-04-13T19:23:38.939758165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:23:38.944577 containerd[1480]: time="2026-04-13T19:23:38.944515885Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:38.945848 containerd[1480]: time="2026-04-13T19:23:38.945614205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.0316ms" Apr 13 19:23:38.948786 containerd[1480]: time="2026-04-13T19:23:38.948682205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.72564ms" Apr 13 19:23:38.949535 containerd[1480]: time="2026-04-13T19:23:38.949499325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.66396ms" Apr 13 19:23:39.104649 containerd[1480]: time="2026-04-13T19:23:39.104047445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:39.104649 containerd[1480]: time="2026-04-13T19:23:39.104110205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:39.104649 containerd[1480]: time="2026-04-13T19:23:39.104127485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.105293 containerd[1480]: time="2026-04-13T19:23:39.105092285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.108270 containerd[1480]: time="2026-04-13T19:23:39.107881925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:39.108270 containerd[1480]: time="2026-04-13T19:23:39.108011485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:39.108270 containerd[1480]: time="2026-04-13T19:23:39.108029685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.108270 containerd[1480]: time="2026-04-13T19:23:39.108130605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.113930 containerd[1480]: time="2026-04-13T19:23:39.113668365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:39.113930 containerd[1480]: time="2026-04-13T19:23:39.113732925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:39.113930 containerd[1480]: time="2026-04-13T19:23:39.113748045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.113930 containerd[1480]: time="2026-04-13T19:23:39.113832165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:39.136800 systemd[1]: Started cri-containerd-dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008.scope - libcontainer container dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008. Apr 13 19:23:39.143590 systemd[1]: Started cri-containerd-b94e9091e599e98aa80a46d595e7f9dfcd6a1db0b202c403f6dcc238709c01a8.scope - libcontainer container b94e9091e599e98aa80a46d595e7f9dfcd6a1db0b202c403f6dcc238709c01a8. Apr 13 19:23:39.160648 systemd[1]: Started cri-containerd-17c10abbc8b106b99a5305802f96f3ed9059a3e52e5666c6409d8cb7881b8eba.scope - libcontainer container 17c10abbc8b106b99a5305802f96f3ed9059a3e52e5666c6409d8cb7881b8eba. Apr 13 19:23:39.200421 containerd[1480]: time="2026-04-13T19:23:39.200345645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-e-ee64700b2a,Uid:1c9ce81089f870804dbc347e665d74e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008\"" Apr 13 19:23:39.214244 containerd[1480]: time="2026-04-13T19:23:39.214191725Z" level=info msg="CreateContainer within sandbox \"dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:23:39.229774 containerd[1480]: time="2026-04-13T19:23:39.229225565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-e-ee64700b2a,Uid:1e8e46e5c2fd04709c94e35be0a3c28f,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c10abbc8b106b99a5305802f96f3ed9059a3e52e5666c6409d8cb7881b8eba\"" Apr 13 19:23:39.233674 containerd[1480]: time="2026-04-13T19:23:39.233631725Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-e-ee64700b2a,Uid:752bf0cb5685baafe8db7935b18bfb91,Namespace:kube-system,Attempt:0,} returns sandbox id \"b94e9091e599e98aa80a46d595e7f9dfcd6a1db0b202c403f6dcc238709c01a8\"" Apr 13 19:23:39.234871 containerd[1480]: time="2026-04-13T19:23:39.234739125Z" level=info msg="CreateContainer within sandbox \"dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd\"" Apr 13 19:23:39.236602 containerd[1480]: time="2026-04-13T19:23:39.236570045Z" level=info msg="StartContainer for \"78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd\"" Apr 13 19:23:39.239767 containerd[1480]: time="2026-04-13T19:23:39.239728965Z" level=info msg="CreateContainer within sandbox \"17c10abbc8b106b99a5305802f96f3ed9059a3e52e5666c6409d8cb7881b8eba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:23:39.242780 containerd[1480]: time="2026-04-13T19:23:39.242616525Z" level=info msg="CreateContainer within sandbox \"b94e9091e599e98aa80a46d595e7f9dfcd6a1db0b202c403f6dcc238709c01a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:23:39.263609 containerd[1480]: time="2026-04-13T19:23:39.263559965Z" level=info msg="CreateContainer within sandbox \"17c10abbc8b106b99a5305802f96f3ed9059a3e52e5666c6409d8cb7881b8eba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3cf84a58447e2bdb6eb6fc6d78d13f253384fe4348968f1092ded5783a208ca\"" Apr 13 19:23:39.266064 containerd[1480]: time="2026-04-13T19:23:39.264742485Z" level=info msg="StartContainer for \"f3cf84a58447e2bdb6eb6fc6d78d13f253384fe4348968f1092ded5783a208ca\"" Apr 13 19:23:39.272980 containerd[1480]: time="2026-04-13T19:23:39.272927405Z" level=info msg="CreateContainer within sandbox 
\"b94e9091e599e98aa80a46d595e7f9dfcd6a1db0b202c403f6dcc238709c01a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4ab2ba652b56fcb729577822628266e2ad2ff67c05ff97a0b5004b10defbee9\"" Apr 13 19:23:39.273996 containerd[1480]: time="2026-04-13T19:23:39.273931685Z" level=info msg="StartContainer for \"f4ab2ba652b56fcb729577822628266e2ad2ff67c05ff97a0b5004b10defbee9\"" Apr 13 19:23:39.276646 systemd[1]: Started cri-containerd-78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd.scope - libcontainer container 78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd. Apr 13 19:23:39.311232 systemd[1]: Started cri-containerd-f3cf84a58447e2bdb6eb6fc6d78d13f253384fe4348968f1092ded5783a208ca.scope - libcontainer container f3cf84a58447e2bdb6eb6fc6d78d13f253384fe4348968f1092ded5783a208ca. Apr 13 19:23:39.322128 systemd[1]: Started cri-containerd-f4ab2ba652b56fcb729577822628266e2ad2ff67c05ff97a0b5004b10defbee9.scope - libcontainer container f4ab2ba652b56fcb729577822628266e2ad2ff67c05ff97a0b5004b10defbee9. 
Apr 13 19:23:39.338448 containerd[1480]: time="2026-04-13T19:23:39.338341845Z" level=info msg="StartContainer for \"78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd\" returns successfully" Apr 13 19:23:39.373979 containerd[1480]: time="2026-04-13T19:23:39.373725965Z" level=info msg="StartContainer for \"f3cf84a58447e2bdb6eb6fc6d78d13f253384fe4348968f1092ded5783a208ca\" returns successfully" Apr 13 19:23:39.390513 kubelet[2197]: E0413 19:23:39.390464 2197 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.49.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-ee64700b2a?timeout=10s\": dial tcp 49.13.49.84:6443: connect: connection refused" interval="1.6s" Apr 13 19:23:39.409147 containerd[1480]: time="2026-04-13T19:23:39.409102685Z" level=info msg="StartContainer for \"f4ab2ba652b56fcb729577822628266e2ad2ff67c05ff97a0b5004b10defbee9\" returns successfully" Apr 13 19:23:39.584459 kubelet[2197]: I0413 19:23:39.582123 2197 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:40.031182 kubelet[2197]: E0413 19:23:40.031140 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:40.038609 kubelet[2197]: E0413 19:23:40.038554 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:40.040906 kubelet[2197]: E0413 19:23:40.040665 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.042727 kubelet[2197]: E0413 19:23:41.042280 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.043194 kubelet[2197]: E0413 19:23:41.042774 2197 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.622696 kubelet[2197]: E0413 19:23:41.622641 2197 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-e-ee64700b2a\" not found" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.867857 kubelet[2197]: I0413 19:23:41.867609 2197 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.871754 kubelet[2197]: I0413 19:23:41.871677 2197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.887572 kubelet[2197]: E0413 19:23:41.887261 2197 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-e-ee64700b2a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.887572 kubelet[2197]: I0413 19:23:41.887302 2197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.896762 kubelet[2197]: E0413 19:23:41.895950 2197 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.896762 kubelet[2197]: I0413 19:23:41.895982 2197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.907897 kubelet[2197]: E0413 19:23:41.907828 2197 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:41.946421 kubelet[2197]: I0413 19:23:41.946177 2197 apiserver.go:52] "Watching apiserver" Apr 13 19:23:41.979332 kubelet[2197]: I0413 19:23:41.979287 2197 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:23:42.081308 kubelet[2197]: I0413 19:23:42.081066 2197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:42.087218 kubelet[2197]: E0413 19:23:42.086940 2197 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:43.311420 kubelet[2197]: I0413 19:23:43.311343 2197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:43.792405 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-7.scope)... Apr 13 19:23:43.792766 systemd[1]: Reloading... Apr 13 19:23:43.907471 zram_generator::config[2535]: No configuration found. Apr 13 19:23:44.016812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:44.105596 systemd[1]: Reloading finished in 312 ms. Apr 13 19:23:44.152885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:44.168524 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:23:44.168914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:23:44.168996 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 121.8M memory peak, 0B memory swap peak. Apr 13 19:23:44.180941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:44.306747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:44.319985 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:23:44.368516 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:23:44.384995 kubelet[2577]: I0413 19:23:44.384519 2577 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 19:23:44.385234 kubelet[2577]: I0413 19:23:44.385216 2577 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:23:44.385330 kubelet[2577]: I0413 19:23:44.385321 2577 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:23:44.385405 kubelet[2577]: I0413 19:23:44.385394 2577 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:23:44.386514 kubelet[2577]: I0413 19:23:44.386478 2577 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 19:23:44.388483 kubelet[2577]: I0413 19:23:44.388406 2577 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:23:44.392291 kubelet[2577]: I0413 19:23:44.391725 2577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:23:44.398444 kubelet[2577]: E0413 19:23:44.397737 2577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:23:44.398444 kubelet[2577]: I0413 19:23:44.397802 2577 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:23:44.400940 kubelet[2577]: I0413 19:23:44.400915 2577 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:23:44.401340 kubelet[2577]: I0413 19:23:44.401305 2577 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:23:44.401685 kubelet[2577]: I0413 19:23:44.401502 2577 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-e-ee64700b2a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:23:44.401811 kubelet[2577]: I0413 19:23:44.401799 2577 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 
19:23:44.401863 kubelet[2577]: I0413 19:23:44.401855 2577 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 19:23:44.401931 kubelet[2577]: I0413 19:23:44.401921 2577 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:23:44.402234 kubelet[2577]: I0413 19:23:44.402219 2577 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 19:23:44.402536 kubelet[2577]: I0413 19:23:44.402513 2577 kubelet.go:482] "Attempting to sync node with API server" Apr 13 19:23:44.402623 kubelet[2577]: I0413 19:23:44.402613 2577 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:23:44.402682 kubelet[2577]: I0413 19:23:44.402674 2577 kubelet.go:394] "Adding apiserver pod source" Apr 13 19:23:44.402734 kubelet[2577]: I0413 19:23:44.402726 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:23:44.407587 kubelet[2577]: I0413 19:23:44.407532 2577 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:23:44.409995 kubelet[2577]: I0413 19:23:44.409830 2577 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:23:44.409995 kubelet[2577]: I0413 19:23:44.409884 2577 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:23:44.421112 kubelet[2577]: I0413 19:23:44.419474 2577 server.go:1257] "Started kubelet" Apr 13 19:23:44.423263 kubelet[2577]: I0413 19:23:44.423225 2577 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 19:23:44.428844 kubelet[2577]: I0413 19:23:44.428753 2577 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:23:44.431164 kubelet[2577]: I0413 19:23:44.431109 2577 server.go:317] "Adding debug handlers 
to kubelet server" Apr 13 19:23:44.442100 kubelet[2577]: I0413 19:23:44.442011 2577 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:23:44.442347 kubelet[2577]: I0413 19:23:44.442122 2577 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:23:44.442517 kubelet[2577]: I0413 19:23:44.442493 2577 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:23:44.444380 kubelet[2577]: I0413 19:23:44.444310 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:23:44.445587 kubelet[2577]: I0413 19:23:44.445552 2577 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:23:44.450797 kubelet[2577]: I0413 19:23:44.449580 2577 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:23:44.451593 kubelet[2577]: I0413 19:23:44.451555 2577 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:23:44.455707 kubelet[2577]: I0413 19:23:44.451078 2577 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 19:23:44.455939 kubelet[2577]: I0413 19:23:44.451095 2577 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:23:44.459658 kubelet[2577]: E0413 19:23:44.451124 2577 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-e-ee64700b2a\" not found" Apr 13 19:23:44.459937 kubelet[2577]: I0413 19:23:44.459922 2577 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:23:44.468576 kubelet[2577]: I0413 19:23:44.467231 2577 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:23:44.468777 kubelet[2577]: I0413 19:23:44.468759 2577 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 19:23:44.468849 kubelet[2577]: I0413 19:23:44.468841 2577 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 19:23:44.469006 kubelet[2577]: E0413 19:23:44.468984 2577 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:23:44.470078 kubelet[2577]: I0413 19:23:44.470044 2577 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:23:44.537339 kubelet[2577]: I0413 19:23:44.537311 2577 cpu_manager.go:225] "Starting" policy="none" Apr 13 19:23:44.537544 kubelet[2577]: I0413 19:23:44.537527 2577 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 19:23:44.537605 kubelet[2577]: I0413 19:23:44.537595 2577 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 19:23:44.538635 kubelet[2577]: I0413 19:23:44.538608 2577 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538748 2577 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538778 2577 policy_none.go:50] "Start" Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538789 2577 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538802 2577 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538916 2577 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:23:44.539432 kubelet[2577]: I0413 19:23:44.538930 2577 
policy_none.go:44] "Start" Apr 13 19:23:44.544397 kubelet[2577]: E0413 19:23:44.544281 2577 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:23:44.544796 kubelet[2577]: I0413 19:23:44.544772 2577 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:23:44.544938 kubelet[2577]: I0413 19:23:44.544903 2577 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:23:44.547108 kubelet[2577]: I0413 19:23:44.547072 2577 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:23:44.552187 kubelet[2577]: E0413 19:23:44.551832 2577 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:23:44.572515 kubelet[2577]: I0413 19:23:44.570904 2577 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.572515 kubelet[2577]: I0413 19:23:44.571399 2577 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.572515 kubelet[2577]: I0413 19:23:44.571487 2577 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.585696 kubelet[2577]: E0413 19:23:44.585612 2577 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.651595 kubelet[2577]: I0413 19:23:44.651497 2577 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.660909 kubelet[2577]: I0413 19:23:44.660845 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.660909 kubelet[2577]: I0413 19:23:44.660919 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/752bf0cb5685baafe8db7935b18bfb91-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-e-ee64700b2a\" (UID: \"752bf0cb5685baafe8db7935b18bfb91\") " pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661102 kubelet[2577]: I0413 19:23:44.660942 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661102 kubelet[2577]: I0413 19:23:44.660963 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661102 kubelet[2577]: I0413 19:23:44.660983 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661102 
kubelet[2577]: I0413 19:23:44.661003 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661102 kubelet[2577]: I0413 19:23:44.661017 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e8e46e5c2fd04709c94e35be0a3c28f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" (UID: \"1e8e46e5c2fd04709c94e35be0a3c28f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661225 kubelet[2577]: I0413 19:23:44.661031 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.661225 kubelet[2577]: I0413 19:23:44.661050 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c9ce81089f870804dbc347e665d74e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-e-ee64700b2a\" (UID: \"1c9ce81089f870804dbc347e665d74e3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.667864 kubelet[2577]: I0413 19:23:44.667811 2577 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:44.668044 kubelet[2577]: I0413 19:23:44.667913 2577 kubelet_node_status.go:77] "Successfully registered node" 
node="ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:45.405635 kubelet[2577]: I0413 19:23:45.405557 2577 apiserver.go:52] "Watching apiserver" Apr 13 19:23:45.456275 kubelet[2577]: I0413 19:23:45.456202 2577 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:23:45.513534 kubelet[2577]: I0413 19:23:45.511690 2577 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:45.522712 kubelet[2577]: E0413 19:23:45.522615 2577 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-e-ee64700b2a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" Apr 13 19:23:45.556019 kubelet[2577]: I0413 19:23:45.555929 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-e-ee64700b2a" podStartSLOduration=1.555910965 podStartE2EDuration="1.555910965s" podCreationTimestamp="2026-04-13 19:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:45.544941245 +0000 UTC m=+1.218424681" watchObservedRunningTime="2026-04-13 19:23:45.555910965 +0000 UTC m=+1.229394401" Apr 13 19:23:45.570842 kubelet[2577]: I0413 19:23:45.570614 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-e-ee64700b2a" podStartSLOduration=1.5705845250000001 podStartE2EDuration="1.570584525s" podCreationTimestamp="2026-04-13 19:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:45.557291205 +0000 UTC m=+1.230774641" watchObservedRunningTime="2026-04-13 19:23:45.570584525 +0000 UTC m=+1.244067961" Apr 13 19:23:45.833869 kubelet[2577]: I0413 19:23:45.833546 2577 pod_startup_latency_tracker.go:108] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-ee64700b2a" podStartSLOduration=2.833528245 podStartE2EDuration="2.833528245s" podCreationTimestamp="2026-04-13 19:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:45.572137845 +0000 UTC m=+1.245621281" watchObservedRunningTime="2026-04-13 19:23:45.833528245 +0000 UTC m=+1.507011681" Apr 13 19:23:49.275728 kubelet[2577]: I0413 19:23:49.275691 2577 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:23:49.278252 containerd[1480]: time="2026-04-13T19:23:49.277008165Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:23:49.278669 kubelet[2577]: I0413 19:23:49.277589 2577 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:23:50.440001 systemd[1]: Created slice kubepods-besteffort-podaa813b56_d77f_405c_afbb_1fba72c287a3.slice - libcontainer container kubepods-besteffort-podaa813b56_d77f_405c_afbb_1fba72c287a3.slice. 
Apr 13 19:23:50.494650 kubelet[2577]: I0413 19:23:50.494531 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa813b56-d77f-405c-afbb-1fba72c287a3-kube-proxy\") pod \"kube-proxy-vmzsw\" (UID: \"aa813b56-d77f-405c-afbb-1fba72c287a3\") " pod="kube-system/kube-proxy-vmzsw" Apr 13 19:23:50.495602 kubelet[2577]: I0413 19:23:50.494663 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xt5b\" (UniqueName: \"kubernetes.io/projected/aa813b56-d77f-405c-afbb-1fba72c287a3-kube-api-access-6xt5b\") pod \"kube-proxy-vmzsw\" (UID: \"aa813b56-d77f-405c-afbb-1fba72c287a3\") " pod="kube-system/kube-proxy-vmzsw" Apr 13 19:23:50.495602 kubelet[2577]: I0413 19:23:50.494777 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa813b56-d77f-405c-afbb-1fba72c287a3-xtables-lock\") pod \"kube-proxy-vmzsw\" (UID: \"aa813b56-d77f-405c-afbb-1fba72c287a3\") " pod="kube-system/kube-proxy-vmzsw" Apr 13 19:23:50.495602 kubelet[2577]: I0413 19:23:50.494850 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa813b56-d77f-405c-afbb-1fba72c287a3-lib-modules\") pod \"kube-proxy-vmzsw\" (UID: \"aa813b56-d77f-405c-afbb-1fba72c287a3\") " pod="kube-system/kube-proxy-vmzsw" Apr 13 19:23:50.632164 systemd[1]: Created slice kubepods-besteffort-pod0bcd9e89_39ed_4fec_aaca_470c68ba13ae.slice - libcontainer container kubepods-besteffort-pod0bcd9e89_39ed_4fec_aaca_470c68ba13ae.slice. 
Apr 13 19:23:50.696626 kubelet[2577]: I0413 19:23:50.696179 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldjv\" (UniqueName: \"kubernetes.io/projected/0bcd9e89-39ed-4fec-aaca-470c68ba13ae-kube-api-access-8ldjv\") pod \"tigera-operator-6cf4cccc57-c5gnf\" (UID: \"0bcd9e89-39ed-4fec-aaca-470c68ba13ae\") " pod="tigera-operator/tigera-operator-6cf4cccc57-c5gnf" Apr 13 19:23:50.696626 kubelet[2577]: I0413 19:23:50.696254 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0bcd9e89-39ed-4fec-aaca-470c68ba13ae-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-c5gnf\" (UID: \"0bcd9e89-39ed-4fec-aaca-470c68ba13ae\") " pod="tigera-operator/tigera-operator-6cf4cccc57-c5gnf" Apr 13 19:23:50.754894 containerd[1480]: time="2026-04-13T19:23:50.754825845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmzsw,Uid:aa813b56-d77f-405c-afbb-1fba72c287a3,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:50.784049 containerd[1480]: time="2026-04-13T19:23:50.783747925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:50.784049 containerd[1480]: time="2026-04-13T19:23:50.783805925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:50.784049 containerd[1480]: time="2026-04-13T19:23:50.783819925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:50.784049 containerd[1480]: time="2026-04-13T19:23:50.783904645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:50.807837 systemd[1]: Started cri-containerd-2d9abfc1030897df33fc85dfc7f325249c94639f63dc2659e014251ef1c459b8.scope - libcontainer container 2d9abfc1030897df33fc85dfc7f325249c94639f63dc2659e014251ef1c459b8. Apr 13 19:23:50.841947 containerd[1480]: time="2026-04-13T19:23:50.841906925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmzsw,Uid:aa813b56-d77f-405c-afbb-1fba72c287a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9abfc1030897df33fc85dfc7f325249c94639f63dc2659e014251ef1c459b8\"" Apr 13 19:23:50.848991 containerd[1480]: time="2026-04-13T19:23:50.848924205Z" level=info msg="CreateContainer within sandbox \"2d9abfc1030897df33fc85dfc7f325249c94639f63dc2659e014251ef1c459b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:23:50.866140 containerd[1480]: time="2026-04-13T19:23:50.865995845Z" level=info msg="CreateContainer within sandbox \"2d9abfc1030897df33fc85dfc7f325249c94639f63dc2659e014251ef1c459b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a7cb396341bea6c7f2358bb29b52a53efb19778c19da13d5321a3c1e7a15adc\"" Apr 13 19:23:50.866700 containerd[1480]: time="2026-04-13T19:23:50.866672085Z" level=info msg="StartContainer for \"1a7cb396341bea6c7f2358bb29b52a53efb19778c19da13d5321a3c1e7a15adc\"" Apr 13 19:23:50.896669 systemd[1]: Started cri-containerd-1a7cb396341bea6c7f2358bb29b52a53efb19778c19da13d5321a3c1e7a15adc.scope - libcontainer container 1a7cb396341bea6c7f2358bb29b52a53efb19778c19da13d5321a3c1e7a15adc. 
Apr 13 19:23:50.927802 containerd[1480]: time="2026-04-13T19:23:50.927745285Z" level=info msg="StartContainer for \"1a7cb396341bea6c7f2358bb29b52a53efb19778c19da13d5321a3c1e7a15adc\" returns successfully" Apr 13 19:23:50.947135 containerd[1480]: time="2026-04-13T19:23:50.946948805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-c5gnf,Uid:0bcd9e89-39ed-4fec-aaca-470c68ba13ae,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:23:50.976908 containerd[1480]: time="2026-04-13T19:23:50.975932645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:50.976908 containerd[1480]: time="2026-04-13T19:23:50.976014365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:50.976908 containerd[1480]: time="2026-04-13T19:23:50.976047605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:50.976908 containerd[1480]: time="2026-04-13T19:23:50.976147645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:51.000661 systemd[1]: Started cri-containerd-89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8.scope - libcontainer container 89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8. 
Apr 13 19:23:51.046808 containerd[1480]: time="2026-04-13T19:23:51.046763645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-c5gnf,Uid:0bcd9e89-39ed-4fec-aaca-470c68ba13ae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8\"" Apr 13 19:23:51.052118 containerd[1480]: time="2026-04-13T19:23:51.051788565Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:23:52.624944 kubelet[2577]: I0413 19:23:52.624554 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vmzsw" podStartSLOduration=2.624531685 podStartE2EDuration="2.624531685s" podCreationTimestamp="2026-04-13 19:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:51.542084245 +0000 UTC m=+7.215567681" watchObservedRunningTime="2026-04-13 19:23:52.624531685 +0000 UTC m=+8.298015161" Apr 13 19:23:52.974980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631021846.mount: Deactivated successfully. 
Apr 13 19:23:54.427440 containerd[1480]: time="2026-04-13T19:23:54.426705485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:54.428279 containerd[1480]: time="2026-04-13T19:23:54.428212805Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:23:54.430220 containerd[1480]: time="2026-04-13T19:23:54.429972325Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:54.434454 containerd[1480]: time="2026-04-13T19:23:54.434213765Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.38237672s" Apr 13 19:23:54.434454 containerd[1480]: time="2026-04-13T19:23:54.434266445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:23:54.435662 containerd[1480]: time="2026-04-13T19:23:54.435583965Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:54.442276 containerd[1480]: time="2026-04-13T19:23:54.442125325Z" level=info msg="CreateContainer within sandbox \"89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:23:54.457782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478044926.mount: Deactivated successfully. 
Apr 13 19:23:54.459680 containerd[1480]: time="2026-04-13T19:23:54.459635365Z" level=info msg="CreateContainer within sandbox \"89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6\"" Apr 13 19:23:54.460496 containerd[1480]: time="2026-04-13T19:23:54.460454765Z" level=info msg="StartContainer for \"90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6\"" Apr 13 19:23:54.491709 systemd[1]: run-containerd-runc-k8s.io-90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6-runc.ehLTOx.mount: Deactivated successfully. Apr 13 19:23:54.500770 systemd[1]: Started cri-containerd-90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6.scope - libcontainer container 90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6. Apr 13 19:23:54.537827 containerd[1480]: time="2026-04-13T19:23:54.537722805Z" level=info msg="StartContainer for \"90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6\" returns successfully" Apr 13 19:23:55.558267 kubelet[2577]: I0413 19:23:55.558059 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-c5gnf" podStartSLOduration=2.173245485 podStartE2EDuration="5.558042485s" podCreationTimestamp="2026-04-13 19:23:50 +0000 UTC" firstStartedPulling="2026-04-13 19:23:51.051214645 +0000 UTC m=+6.724698081" lastFinishedPulling="2026-04-13 19:23:54.436011645 +0000 UTC m=+10.109495081" observedRunningTime="2026-04-13 19:23:55.557876725 +0000 UTC m=+11.231360201" watchObservedRunningTime="2026-04-13 19:23:55.558042485 +0000 UTC m=+11.231525961" Apr 13 19:24:00.924535 sudo[1700]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:00.942770 sshd[1697]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:00.946993 systemd[1]: 
sshd@6-49.13.49.84:22-50.85.169.122:41898.service: Deactivated successfully. Apr 13 19:24:00.951893 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:24:00.952099 systemd[1]: session-7.scope: Consumed 5.456s CPU time, 152.9M memory peak, 0B memory swap peak. Apr 13 19:24:00.954196 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:24:00.955592 systemd-logind[1458]: Removed session 7. Apr 13 19:24:07.397152 systemd[1]: Created slice kubepods-besteffort-pod2570c5bd_7155_4c40_9ab4_48d263d3ec31.slice - libcontainer container kubepods-besteffort-pod2570c5bd_7155_4c40_9ab4_48d263d3ec31.slice. Apr 13 19:24:07.410907 kubelet[2577]: I0413 19:24:07.410734 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2570c5bd-7155-4c40-9ab4-48d263d3ec31-typha-certs\") pod \"calico-typha-6f5c785988-xl8vl\" (UID: \"2570c5bd-7155-4c40-9ab4-48d263d3ec31\") " pod="calico-system/calico-typha-6f5c785988-xl8vl" Apr 13 19:24:07.410907 kubelet[2577]: I0413 19:24:07.410780 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk75h\" (UniqueName: \"kubernetes.io/projected/2570c5bd-7155-4c40-9ab4-48d263d3ec31-kube-api-access-mk75h\") pod \"calico-typha-6f5c785988-xl8vl\" (UID: \"2570c5bd-7155-4c40-9ab4-48d263d3ec31\") " pod="calico-system/calico-typha-6f5c785988-xl8vl" Apr 13 19:24:07.410907 kubelet[2577]: I0413 19:24:07.410798 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2570c5bd-7155-4c40-9ab4-48d263d3ec31-tigera-ca-bundle\") pod \"calico-typha-6f5c785988-xl8vl\" (UID: \"2570c5bd-7155-4c40-9ab4-48d263d3ec31\") " pod="calico-system/calico-typha-6f5c785988-xl8vl" Apr 13 19:24:07.506467 systemd[1]: Created slice kubepods-besteffort-pod1a89a649_3173_4927_bfd1_b7b666459310.slice 
- libcontainer container kubepods-besteffort-pod1a89a649_3173_4927_bfd1_b7b666459310.slice. Apr 13 19:24:07.603172 kubelet[2577]: E0413 19:24:07.603104 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:07.612266 kubelet[2577]: I0413 19:24:07.612179 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-cni-log-dir\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.612266 kubelet[2577]: I0413 19:24:07.612229 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-xtables-lock\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.612266 kubelet[2577]: I0413 19:24:07.612251 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps9w\" (UniqueName: \"kubernetes.io/projected/1a89a649-3173-4927-bfd1-b7b666459310-kube-api-access-bps9w\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.612266 kubelet[2577]: I0413 19:24:07.612270 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-cni-bin-dir\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " 
pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613570 kubelet[2577]: I0413 19:24:07.612289 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-cni-net-dir\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613570 kubelet[2577]: I0413 19:24:07.612302 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-lib-modules\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613570 kubelet[2577]: I0413 19:24:07.612316 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a89a649-3173-4927-bfd1-b7b666459310-tigera-ca-bundle\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613570 kubelet[2577]: I0413 19:24:07.612332 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-var-run-calico\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613570 kubelet[2577]: I0413 19:24:07.612348 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-flexvol-driver-host\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 
kubelet[2577]: I0413 19:24:07.612366 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-policysync\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 kubelet[2577]: I0413 19:24:07.612383 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-bpffs\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 kubelet[2577]: I0413 19:24:07.612405 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1a89a649-3173-4927-bfd1-b7b666459310-node-certs\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 kubelet[2577]: I0413 19:24:07.612435 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-nodeproc\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 kubelet[2577]: I0413 19:24:07.612450 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-sys-fs\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.613697 kubelet[2577]: I0413 19:24:07.612474 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a89a649-3173-4927-bfd1-b7b666459310-var-lib-calico\") pod \"calico-node-mb6t5\" (UID: \"1a89a649-3173-4927-bfd1-b7b666459310\") " pod="calico-system/calico-node-mb6t5" Apr 13 19:24:07.706073 containerd[1480]: time="2026-04-13T19:24:07.705885357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5c785988-xl8vl,Uid:2570c5bd-7155-4c40-9ab4-48d263d3ec31,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:07.713268 kubelet[2577]: I0413 19:24:07.713215 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b7e65d94-d910-4098-accc-faa807a02ba1-varrun\") pod \"csi-node-driver-b2r6n\" (UID: \"b7e65d94-d910-4098-accc-faa807a02ba1\") " pod="calico-system/csi-node-driver-b2r6n" Apr 13 19:24:07.713445 kubelet[2577]: I0413 19:24:07.713314 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchlt\" (UniqueName: \"kubernetes.io/projected/b7e65d94-d910-4098-accc-faa807a02ba1-kube-api-access-bchlt\") pod \"csi-node-driver-b2r6n\" (UID: \"b7e65d94-d910-4098-accc-faa807a02ba1\") " pod="calico-system/csi-node-driver-b2r6n" Apr 13 19:24:07.713445 kubelet[2577]: I0413 19:24:07.713401 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7e65d94-d910-4098-accc-faa807a02ba1-kubelet-dir\") pod \"csi-node-driver-b2r6n\" (UID: \"b7e65d94-d910-4098-accc-faa807a02ba1\") " pod="calico-system/csi-node-driver-b2r6n" Apr 13 19:24:07.713445 kubelet[2577]: I0413 19:24:07.713435 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7e65d94-d910-4098-accc-faa807a02ba1-registration-dir\") pod \"csi-node-driver-b2r6n\" (UID: 
\"b7e65d94-d910-4098-accc-faa807a02ba1\") " pod="calico-system/csi-node-driver-b2r6n" Apr 13 19:24:07.713546 kubelet[2577]: I0413 19:24:07.713455 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7e65d94-d910-4098-accc-faa807a02ba1-socket-dir\") pod \"csi-node-driver-b2r6n\" (UID: \"b7e65d94-d910-4098-accc-faa807a02ba1\") " pod="calico-system/csi-node-driver-b2r6n" Apr 13 19:24:07.721325 kubelet[2577]: E0413 19:24:07.719689 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.721325 kubelet[2577]: W0413 19:24:07.719720 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.721325 kubelet[2577]: E0413 19:24:07.719744 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.722950 kubelet[2577]: E0413 19:24:07.722537 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.722950 kubelet[2577]: W0413 19:24:07.722945 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.723100 kubelet[2577]: E0413 19:24:07.722971 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.731885 kubelet[2577]: E0413 19:24:07.725731 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.731885 kubelet[2577]: W0413 19:24:07.729476 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.731885 kubelet[2577]: E0413 19:24:07.729517 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.733651 kubelet[2577]: E0413 19:24:07.733595 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.733651 kubelet[2577]: W0413 19:24:07.733631 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.733819 kubelet[2577]: E0413 19:24:07.733670 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.735963 kubelet[2577]: E0413 19:24:07.735780 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.735963 kubelet[2577]: W0413 19:24:07.735939 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.736122 kubelet[2577]: E0413 19:24:07.735976 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.737473 kubelet[2577]: E0413 19:24:07.737432 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.738263 kubelet[2577]: W0413 19:24:07.737464 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.738263 kubelet[2577]: E0413 19:24:07.738017 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.739647 kubelet[2577]: E0413 19:24:07.739617 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.739997 kubelet[2577]: W0413 19:24:07.739738 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.739997 kubelet[2577]: E0413 19:24:07.739764 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.741633 kubelet[2577]: E0413 19:24:07.741531 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.741633 kubelet[2577]: W0413 19:24:07.741565 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.741633 kubelet[2577]: E0413 19:24:07.741603 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.742452 kubelet[2577]: E0413 19:24:07.741973 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.742452 kubelet[2577]: W0413 19:24:07.742010 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.742452 kubelet[2577]: E0413 19:24:07.742033 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.759570 kubelet[2577]: E0413 19:24:07.759525 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.759570 kubelet[2577]: W0413 19:24:07.759561 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.759855 kubelet[2577]: E0413 19:24:07.759587 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.760178 containerd[1480]: time="2026-04-13T19:24:07.759208895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:07.761519 containerd[1480]: time="2026-04-13T19:24:07.761215285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:07.761519 containerd[1480]: time="2026-04-13T19:24:07.761250164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:07.761519 containerd[1480]: time="2026-04-13T19:24:07.761373081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:07.782764 systemd[1]: Started cri-containerd-edede67ac9be48488beadd874c952462edc749ea13fc4b0cc5f9aca3d51ca28e.scope - libcontainer container edede67ac9be48488beadd874c952462edc749ea13fc4b0cc5f9aca3d51ca28e. Apr 13 19:24:07.815475 kubelet[2577]: E0413 19:24:07.814715 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.815475 kubelet[2577]: W0413 19:24:07.814840 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.815475 kubelet[2577]: E0413 19:24:07.814881 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.815672 kubelet[2577]: E0413 19:24:07.815542 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.815672 kubelet[2577]: W0413 19:24:07.815556 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.815672 kubelet[2577]: E0413 19:24:07.815663 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.816922 kubelet[2577]: E0413 19:24:07.816840 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.817120 kubelet[2577]: W0413 19:24:07.817083 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.817171 kubelet[2577]: E0413 19:24:07.817127 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.818146 kubelet[2577]: E0413 19:24:07.818116 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.818440 kubelet[2577]: W0413 19:24:07.818138 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.818576 kubelet[2577]: E0413 19:24:07.818550 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.819207 kubelet[2577]: E0413 19:24:07.819176 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.819207 kubelet[2577]: W0413 19:24:07.819197 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.819207 kubelet[2577]: E0413 19:24:07.819212 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.819398 containerd[1480]: time="2026-04-13T19:24:07.819354702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb6t5,Uid:1a89a649-3173-4927-bfd1-b7b666459310,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:07.820686 kubelet[2577]: E0413 19:24:07.820657 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.820686 kubelet[2577]: W0413 19:24:07.820681 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.820927 kubelet[2577]: E0413 19:24:07.820696 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.821467 kubelet[2577]: E0413 19:24:07.821328 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.821467 kubelet[2577]: W0413 19:24:07.821345 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.821467 kubelet[2577]: E0413 19:24:07.821359 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.822037 kubelet[2577]: E0413 19:24:07.821856 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.822037 kubelet[2577]: W0413 19:24:07.821925 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.822037 kubelet[2577]: E0413 19:24:07.821943 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.822692 kubelet[2577]: E0413 19:24:07.822675 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.822816 kubelet[2577]: W0413 19:24:07.822802 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.822951 kubelet[2577]: E0413 19:24:07.822883 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.823224 kubelet[2577]: E0413 19:24:07.823209 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.823300 kubelet[2577]: W0413 19:24:07.823286 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.823356 kubelet[2577]: E0413 19:24:07.823344 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.823768 kubelet[2577]: E0413 19:24:07.823615 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.823768 kubelet[2577]: W0413 19:24:07.823629 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.823768 kubelet[2577]: E0413 19:24:07.823641 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.824206 kubelet[2577]: E0413 19:24:07.824065 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.824206 kubelet[2577]: W0413 19:24:07.824080 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.824206 kubelet[2577]: E0413 19:24:07.824094 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.824811 kubelet[2577]: E0413 19:24:07.824695 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.824811 kubelet[2577]: W0413 19:24:07.824716 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.824811 kubelet[2577]: E0413 19:24:07.824730 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.825538 kubelet[2577]: E0413 19:24:07.825511 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.826563 kubelet[2577]: W0413 19:24:07.826534 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.826563 kubelet[2577]: E0413 19:24:07.826570 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.827533 kubelet[2577]: E0413 19:24:07.827481 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.829481 kubelet[2577]: W0413 19:24:07.829447 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.829580 kubelet[2577]: E0413 19:24:07.829492 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.830277 kubelet[2577]: E0413 19:24:07.830254 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.830277 kubelet[2577]: W0413 19:24:07.830278 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.830466 containerd[1480]: time="2026-04-13T19:24:07.830429943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5c785988-xl8vl,Uid:2570c5bd-7155-4c40-9ab4-48d263d3ec31,Namespace:calico-system,Attempt:0,} returns sandbox id \"edede67ac9be48488beadd874c952462edc749ea13fc4b0cc5f9aca3d51ca28e\"" Apr 13 19:24:07.831131 kubelet[2577]: E0413 19:24:07.831076 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.832475 kubelet[2577]: E0413 19:24:07.831830 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.832475 kubelet[2577]: W0413 19:24:07.832048 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.832475 kubelet[2577]: E0413 19:24:07.832073 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.833591 kubelet[2577]: E0413 19:24:07.833569 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.833591 kubelet[2577]: W0413 19:24:07.833589 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.833801 kubelet[2577]: E0413 19:24:07.833607 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.834644 kubelet[2577]: E0413 19:24:07.834608 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.834715 kubelet[2577]: W0413 19:24:07.834637 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.834715 kubelet[2577]: E0413 19:24:07.834666 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.835392 kubelet[2577]: E0413 19:24:07.834980 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.835392 kubelet[2577]: W0413 19:24:07.835012 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.835392 kubelet[2577]: E0413 19:24:07.835025 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.837139 kubelet[2577]: E0413 19:24:07.837083 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.837139 kubelet[2577]: W0413 19:24:07.837107 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.837139 kubelet[2577]: E0413 19:24:07.837127 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.838465 kubelet[2577]: E0413 19:24:07.838440 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.838465 kubelet[2577]: W0413 19:24:07.838460 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.838580 kubelet[2577]: E0413 19:24:07.838477 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.839377 kubelet[2577]: E0413 19:24:07.839330 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.839377 kubelet[2577]: W0413 19:24:07.839354 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.839377 kubelet[2577]: E0413 19:24:07.839369 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.840784 containerd[1480]: time="2026-04-13T19:24:07.840742244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:24:07.841699 kubelet[2577]: E0413 19:24:07.841346 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.841699 kubelet[2577]: W0413 19:24:07.841371 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.841699 kubelet[2577]: E0413 19:24:07.841392 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:07.842663 kubelet[2577]: E0413 19:24:07.842053 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.842663 kubelet[2577]: W0413 19:24:07.842081 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.842663 kubelet[2577]: E0413 19:24:07.842097 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.862753 kubelet[2577]: E0413 19:24:07.862717 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:07.862753 kubelet[2577]: W0413 19:24:07.862745 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:07.862958 kubelet[2577]: E0413 19:24:07.862771 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:07.873541 containerd[1480]: time="2026-04-13T19:24:07.873371183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:07.873748 containerd[1480]: time="2026-04-13T19:24:07.873511460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:07.873748 containerd[1480]: time="2026-04-13T19:24:07.873715335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:07.874330 containerd[1480]: time="2026-04-13T19:24:07.874233961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:07.895112 systemd[1]: Started cri-containerd-17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23.scope - libcontainer container 17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23. Apr 13 19:24:07.932337 containerd[1480]: time="2026-04-13T19:24:07.932286741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb6t5,Uid:1a89a649-3173-4927-bfd1-b7b666459310,Namespace:calico-system,Attempt:0,} returns sandbox id \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\"" Apr 13 19:24:09.365221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227932476.mount: Deactivated successfully. Apr 13 19:24:09.470296 kubelet[2577]: E0413 19:24:09.470248 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:09.878862 containerd[1480]: time="2026-04-13T19:24:09.877984115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:09.879836 containerd[1480]: time="2026-04-13T19:24:09.879731637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:24:09.881346 containerd[1480]: time="2026-04-13T19:24:09.881278202Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 
19:24:09.884362 containerd[1480]: time="2026-04-13T19:24:09.884316255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:09.885542 containerd[1480]: time="2026-04-13T19:24:09.885258114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.044392114s" Apr 13 19:24:09.885542 containerd[1480]: time="2026-04-13T19:24:09.885298113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:24:09.889826 containerd[1480]: time="2026-04-13T19:24:09.888543802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:24:09.907302 containerd[1480]: time="2026-04-13T19:24:09.907250148Z" level=info msg="CreateContainer within sandbox \"edede67ac9be48488beadd874c952462edc749ea13fc4b0cc5f9aca3d51ca28e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:24:09.923893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096768293.mount: Deactivated successfully. 
Apr 13 19:24:09.932333 containerd[1480]: time="2026-04-13T19:24:09.932259755Z" level=info msg="CreateContainer within sandbox \"edede67ac9be48488beadd874c952462edc749ea13fc4b0cc5f9aca3d51ca28e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"688822c70a398880de5e657bfbc92938dc055ea6e8d6aa56d44ad5932d020027\"" Apr 13 19:24:09.935495 containerd[1480]: time="2026-04-13T19:24:09.935281608Z" level=info msg="StartContainer for \"688822c70a398880de5e657bfbc92938dc055ea6e8d6aa56d44ad5932d020027\"" Apr 13 19:24:09.969664 systemd[1]: Started cri-containerd-688822c70a398880de5e657bfbc92938dc055ea6e8d6aa56d44ad5932d020027.scope - libcontainer container 688822c70a398880de5e657bfbc92938dc055ea6e8d6aa56d44ad5932d020027. Apr 13 19:24:10.011676 containerd[1480]: time="2026-04-13T19:24:10.011627814Z" level=info msg="StartContainer for \"688822c70a398880de5e657bfbc92938dc055ea6e8d6aa56d44ad5932d020027\" returns successfully" Apr 13 19:24:10.595046 kubelet[2577]: I0413 19:24:10.594949 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6f5c785988-xl8vl" podStartSLOduration=1.548420699 podStartE2EDuration="3.594935443s" podCreationTimestamp="2026-04-13 19:24:07 +0000 UTC" firstStartedPulling="2026-04-13 19:24:07.840239937 +0000 UTC m=+23.513723373" lastFinishedPulling="2026-04-13 19:24:09.886754681 +0000 UTC m=+25.560238117" observedRunningTime="2026-04-13 19:24:10.594087501 +0000 UTC m=+26.267570937" watchObservedRunningTime="2026-04-13 19:24:10.594935443 +0000 UTC m=+26.268418879" Apr 13 19:24:10.612860 kubelet[2577]: E0413 19:24:10.612823 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:10.613016 kubelet[2577]: W0413 19:24:10.612851 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Apr 13 19:24:10.613016 kubelet[2577]: E0413 19:24:10.612900 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:10.613150 kubelet[2577]: E0413 19:24:10.613110 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:10.613150 kubelet[2577]: W0413 19:24:10.613121 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:10.613150 kubelet[2577]: E0413 19:24:10.613147 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:10.613328 kubelet[2577]: E0413 19:24:10.613316 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:10.613328 kubelet[2577]: W0413 19:24:10.613327 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:10.613439 kubelet[2577]: E0413 19:24:10.613336 2577 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:11.451592 containerd[1480]: time="2026-04-13T19:24:11.451405272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:11.453343 containerd[1480]: time="2026-04-13T19:24:11.453212197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 13 19:24:11.455561 containerd[1480]: time="2026-04-13T19:24:11.454728048Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:11.458586 containerd[1480]: time="2026-04-13T19:24:11.458519534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:11.460036 containerd[1480]: time="2026-04-13T19:24:11.459503155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.570909434s" Apr 13 19:24:11.460036 containerd[1480]: time="2026-04-13T19:24:11.459547154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:24:11.468178 containerd[1480]: time="2026-04-13T19:24:11.468117787Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:24:11.469314 kubelet[2577]: E0413 19:24:11.469242 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:11.494209 containerd[1480]: time="2026-04-13T19:24:11.494124802Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3\"" Apr 13 19:24:11.496451 containerd[1480]: time="2026-04-13T19:24:11.495028544Z" level=info msg="StartContainer for \"ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3\"" Apr 13 19:24:11.533710 systemd[1]: Started cri-containerd-ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3.scope - libcontainer container ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3. Apr 13 19:24:11.567823 containerd[1480]: time="2026-04-13T19:24:11.567662733Z" level=info msg="StartContainer for \"ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3\" returns successfully" Apr 13 19:24:11.586786 systemd[1]: cri-containerd-ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3.scope: Deactivated successfully. Apr 13 19:24:11.634599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3-rootfs.mount: Deactivated successfully. 
Apr 13 19:24:11.742560 containerd[1480]: time="2026-04-13T19:24:11.742262820Z" level=info msg="shim disconnected" id=ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3 namespace=k8s.io Apr 13 19:24:11.742560 containerd[1480]: time="2026-04-13T19:24:11.742343898Z" level=warning msg="cleaning up after shim disconnected" id=ac940f46a8aac2c8f3caa7b484cc86a535f69f6aa06123f40f72ed4cf13daab3 namespace=k8s.io Apr 13 19:24:11.742560 containerd[1480]: time="2026-04-13T19:24:11.742359458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:12.590969 containerd[1480]: time="2026-04-13T19:24:12.590902564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:24:13.470766 kubelet[2577]: E0413 19:24:13.470117 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:15.470282 kubelet[2577]: E0413 19:24:15.469603 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:17.229109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157081948.mount: Deactivated successfully. 
Apr 13 19:24:17.260496 containerd[1480]: time="2026-04-13T19:24:17.260363971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:17.262108 containerd[1480]: time="2026-04-13T19:24:17.261938910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:24:17.264806 containerd[1480]: time="2026-04-13T19:24:17.263309092Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:17.267268 containerd[1480]: time="2026-04-13T19:24:17.266926924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:17.267828 containerd[1480]: time="2026-04-13T19:24:17.267785273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.67684251s" Apr 13 19:24:17.267828 containerd[1480]: time="2026-04-13T19:24:17.267826032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 19:24:17.274209 containerd[1480]: time="2026-04-13T19:24:17.274169109Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:24:17.292542 containerd[1480]: time="2026-04-13T19:24:17.292492307Z" level=info 
msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7\"" Apr 13 19:24:17.295026 containerd[1480]: time="2026-04-13T19:24:17.294962914Z" level=info msg="StartContainer for \"9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7\"" Apr 13 19:24:17.335697 systemd[1]: Started cri-containerd-9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7.scope - libcontainer container 9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7. Apr 13 19:24:17.369951 containerd[1480]: time="2026-04-13T19:24:17.369900446Z" level=info msg="StartContainer for \"9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7\" returns successfully" Apr 13 19:24:17.469552 kubelet[2577]: E0413 19:24:17.469475 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:17.483875 systemd[1]: cri-containerd-9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7.scope: Deactivated successfully. 
Apr 13 19:24:17.657140 containerd[1480]: time="2026-04-13T19:24:17.657069937Z" level=info msg="shim disconnected" id=9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7 namespace=k8s.io Apr 13 19:24:17.657718 containerd[1480]: time="2026-04-13T19:24:17.657452052Z" level=warning msg="cleaning up after shim disconnected" id=9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7 namespace=k8s.io Apr 13 19:24:17.657718 containerd[1480]: time="2026-04-13T19:24:17.657476091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:18.230595 systemd[1]: run-containerd-runc-k8s.io-9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7-runc.sfyKI9.mount: Deactivated successfully. Apr 13 19:24:18.230734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd54812779e5e2c66d5cd14d5b84454d7095971367cad93db7c2e6dd11f11f7-rootfs.mount: Deactivated successfully. Apr 13 19:24:18.622339 containerd[1480]: time="2026-04-13T19:24:18.622300834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:24:19.469556 kubelet[2577]: E0413 19:24:19.469504 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:21.201254 containerd[1480]: time="2026-04-13T19:24:21.200334730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:21.202816 containerd[1480]: time="2026-04-13T19:24:21.202734665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:24:21.204210 containerd[1480]: time="2026-04-13T19:24:21.204171691Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:21.207991 containerd[1480]: time="2026-04-13T19:24:21.207941372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:21.208785 containerd[1480]: time="2026-04-13T19:24:21.208731244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.585929056s" Apr 13 19:24:21.208785 containerd[1480]: time="2026-04-13T19:24:21.208781564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:24:21.216762 containerd[1480]: time="2026-04-13T19:24:21.216713283Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:24:21.235693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706185377.mount: Deactivated successfully. 
Apr 13 19:24:21.236573 containerd[1480]: time="2026-04-13T19:24:21.236389242Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8\"" Apr 13 19:24:21.238222 containerd[1480]: time="2026-04-13T19:24:21.238084065Z" level=info msg="StartContainer for \"8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8\"" Apr 13 19:24:21.278820 systemd[1]: Started cri-containerd-8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8.scope - libcontainer container 8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8. Apr 13 19:24:21.315689 containerd[1480]: time="2026-04-13T19:24:21.315630115Z" level=info msg="StartContainer for \"8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8\" returns successfully" Apr 13 19:24:21.469762 kubelet[2577]: E0413 19:24:21.469623 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2r6n" podUID="b7e65d94-d910-4098-accc-faa807a02ba1" Apr 13 19:24:21.902573 containerd[1480]: time="2026-04-13T19:24:21.902517573Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:24:21.905472 systemd[1]: cri-containerd-8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8.scope: Deactivated successfully. 
Apr 13 19:24:21.929555 kubelet[2577]: I0413 19:24:21.929486 2577 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 19:24:21.932544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8-rootfs.mount: Deactivated successfully. Apr 13 19:24:22.019393 containerd[1480]: time="2026-04-13T19:24:22.019222555Z" level=info msg="shim disconnected" id=8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8 namespace=k8s.io Apr 13 19:24:22.019393 containerd[1480]: time="2026-04-13T19:24:22.019280234Z" level=warning msg="cleaning up after shim disconnected" id=8e6a3544d5d8573e318a00cc344aea3507cf690bbb5d870142c672bf5b0e1ad8 namespace=k8s.io Apr 13 19:24:22.019393 containerd[1480]: time="2026-04-13T19:24:22.019288594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:22.025371 systemd[1]: Created slice kubepods-besteffort-podc4e502e5_dd94_49d8_bd2f_926114e20b59.slice - libcontainer container kubepods-besteffort-podc4e502e5_dd94_49d8_bd2f_926114e20b59.slice. Apr 13 19:24:22.037910 systemd[1]: Created slice kubepods-burstable-pod69279b5d_27a1_4477_a9f1_aa1c68505c9c.slice - libcontainer container kubepods-burstable-pod69279b5d_27a1_4477_a9f1_aa1c68505c9c.slice. Apr 13 19:24:22.052049 systemd[1]: Created slice kubepods-besteffort-pod6839a9ef_85b8_4ddf_8641_0352b0c9dd4c.slice - libcontainer container kubepods-besteffort-pod6839a9ef_85b8_4ddf_8641_0352b0c9dd4c.slice. Apr 13 19:24:22.065935 systemd[1]: Created slice kubepods-besteffort-poda434ed07_82db_484c_93a0_fd9b39b96e37.slice - libcontainer container kubepods-besteffort-poda434ed07_82db_484c_93a0_fd9b39b96e37.slice. Apr 13 19:24:22.073602 systemd[1]: Created slice kubepods-besteffort-podb2a4adae_1965_4b2d_8d19_84656dfbab7d.slice - libcontainer container kubepods-besteffort-podb2a4adae_1965_4b2d_8d19_84656dfbab7d.slice. 
Apr 13 19:24:22.084078 systemd[1]: Created slice kubepods-burstable-poda4204cf2_a1a4_42c4_99b7_217ce68ed464.slice - libcontainer container kubepods-burstable-poda4204cf2_a1a4_42c4_99b7_217ce68ed464.slice. Apr 13 19:24:22.092834 systemd[1]: Created slice kubepods-besteffort-pod46a789ab_50b2_4393_9d38_e8e911daa169.slice - libcontainer container kubepods-besteffort-pod46a789ab_50b2_4393_9d38_e8e911daa169.slice. Apr 13 19:24:22.141730 kubelet[2577]: I0413 19:24:22.141474 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4204cf2-a1a4-42c4-99b7-217ce68ed464-config-volume\") pod \"coredns-7d764666f9-c7f65\" (UID: \"a4204cf2-a1a4-42c4-99b7-217ce68ed464\") " pod="kube-system/coredns-7d764666f9-c7f65" Apr 13 19:24:22.141730 kubelet[2577]: I0413 19:24:22.141521 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/46a789ab-50b2-4393-9d38-e8e911daa169-goldmane-key-pair\") pod \"goldmane-9f7667bb8-224z2\" (UID: \"46a789ab-50b2-4393-9d38-e8e911daa169\") " pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.141730 kubelet[2577]: I0413 19:24:22.141544 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a434ed07-82db-484c-93a0-fd9b39b96e37-calico-apiserver-certs\") pod \"calico-apiserver-dcfc9864-vmpn9\" (UID: \"a434ed07-82db-484c-93a0-fd9b39b96e37\") " pod="calico-system/calico-apiserver-dcfc9864-vmpn9" Apr 13 19:24:22.141730 kubelet[2577]: I0413 19:24:22.141564 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wbl\" (UniqueName: \"kubernetes.io/projected/a4204cf2-a1a4-42c4-99b7-217ce68ed464-kube-api-access-96wbl\") pod \"coredns-7d764666f9-c7f65\" (UID: 
\"a4204cf2-a1a4-42c4-99b7-217ce68ed464\") " pod="kube-system/coredns-7d764666f9-c7f65" Apr 13 19:24:22.141730 kubelet[2577]: I0413 19:24:22.141618 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69279b5d-27a1-4477-a9f1-aa1c68505c9c-config-volume\") pod \"coredns-7d764666f9-ngg5x\" (UID: \"69279b5d-27a1-4477-a9f1-aa1c68505c9c\") " pod="kube-system/coredns-7d764666f9-ngg5x" Apr 13 19:24:22.142233 kubelet[2577]: I0413 19:24:22.141641 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224dm\" (UniqueName: \"kubernetes.io/projected/69279b5d-27a1-4477-a9f1-aa1c68505c9c-kube-api-access-224dm\") pod \"coredns-7d764666f9-ngg5x\" (UID: \"69279b5d-27a1-4477-a9f1-aa1c68505c9c\") " pod="kube-system/coredns-7d764666f9-ngg5x" Apr 13 19:24:22.142233 kubelet[2577]: I0413 19:24:22.141663 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6839a9ef-85b8-4ddf-8641-0352b0c9dd4c-tigera-ca-bundle\") pod \"calico-kube-controllers-5d6784dd66-rscc9\" (UID: \"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c\") " pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" Apr 13 19:24:22.142233 kubelet[2577]: I0413 19:24:22.141686 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-backend-key-pair\") pod \"whisker-7cd4c54cd5-vb8pw\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.142233 kubelet[2577]: I0413 19:24:22.141705 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rsc9\" (UniqueName: 
\"kubernetes.io/projected/6839a9ef-85b8-4ddf-8641-0352b0c9dd4c-kube-api-access-6rsc9\") pod \"calico-kube-controllers-5d6784dd66-rscc9\" (UID: \"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c\") " pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" Apr 13 19:24:22.142233 kubelet[2577]: I0413 19:24:22.141726 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a789ab-50b2-4393-9d38-e8e911daa169-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-224z2\" (UID: \"46a789ab-50b2-4393-9d38-e8e911daa169\") " pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.143944 kubelet[2577]: I0413 19:24:22.141747 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpvv\" (UniqueName: \"kubernetes.io/projected/a434ed07-82db-484c-93a0-fd9b39b96e37-kube-api-access-cxpvv\") pod \"calico-apiserver-dcfc9864-vmpn9\" (UID: \"a434ed07-82db-484c-93a0-fd9b39b96e37\") " pod="calico-system/calico-apiserver-dcfc9864-vmpn9" Apr 13 19:24:22.143944 kubelet[2577]: I0413 19:24:22.141770 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b2a4adae-1965-4b2d-8d19-84656dfbab7d-calico-apiserver-certs\") pod \"calico-apiserver-dcfc9864-8nhqr\" (UID: \"b2a4adae-1965-4b2d-8d19-84656dfbab7d\") " pod="calico-system/calico-apiserver-dcfc9864-8nhqr" Apr 13 19:24:22.143944 kubelet[2577]: I0413 19:24:22.141789 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-nginx-config\") pod \"whisker-7cd4c54cd5-vb8pw\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.143944 kubelet[2577]: I0413 19:24:22.141811 2577 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-ca-bundle\") pod \"whisker-7cd4c54cd5-vb8pw\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.143944 kubelet[2577]: I0413 19:24:22.141833 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a789ab-50b2-4393-9d38-e8e911daa169-config\") pod \"goldmane-9f7667bb8-224z2\" (UID: \"46a789ab-50b2-4393-9d38-e8e911daa169\") " pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.144255 kubelet[2577]: I0413 19:24:22.141852 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfctv\" (UniqueName: \"kubernetes.io/projected/46a789ab-50b2-4393-9d38-e8e911daa169-kube-api-access-bfctv\") pod \"goldmane-9f7667bb8-224z2\" (UID: \"46a789ab-50b2-4393-9d38-e8e911daa169\") " pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.144255 kubelet[2577]: I0413 19:24:22.141879 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnjzl\" (UniqueName: \"kubernetes.io/projected/c4e502e5-dd94-49d8-bd2f-926114e20b59-kube-api-access-hnjzl\") pod \"whisker-7cd4c54cd5-vb8pw\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.144255 kubelet[2577]: I0413 19:24:22.141897 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwtz\" (UniqueName: \"kubernetes.io/projected/b2a4adae-1965-4b2d-8d19-84656dfbab7d-kube-api-access-bvwtz\") pod \"calico-apiserver-dcfc9864-8nhqr\" (UID: \"b2a4adae-1965-4b2d-8d19-84656dfbab7d\") " pod="calico-system/calico-apiserver-dcfc9864-8nhqr" Apr 13 
19:24:22.337151 containerd[1480]: time="2026-04-13T19:24:22.336694002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd4c54cd5-vb8pw,Uid:c4e502e5-dd94-49d8-bd2f-926114e20b59,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:22.349140 containerd[1480]: time="2026-04-13T19:24:22.348694327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ngg5x,Uid:69279b5d-27a1-4477-a9f1-aa1c68505c9c,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:22.363499 containerd[1480]: time="2026-04-13T19:24:22.363353787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6784dd66-rscc9,Uid:6839a9ef-85b8-4ddf-8641-0352b0c9dd4c,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:22.373285 containerd[1480]: time="2026-04-13T19:24:22.373036774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-vmpn9,Uid:a434ed07-82db-484c-93a0-fd9b39b96e37,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:22.388907 containerd[1480]: time="2026-04-13T19:24:22.388855063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-8nhqr,Uid:b2a4adae-1965-4b2d-8d19-84656dfbab7d,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:22.391707 containerd[1480]: time="2026-04-13T19:24:22.391624557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7f65,Uid:a4204cf2-a1a4-42c4-99b7-217ce68ed464,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:22.399228 containerd[1480]: time="2026-04-13T19:24:22.398904367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-224z2,Uid:46a789ab-50b2-4393-9d38-e8e911daa169,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:22.536075 containerd[1480]: time="2026-04-13T19:24:22.536011217Z" level=error msg="Failed to destroy network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.538438 containerd[1480]: time="2026-04-13T19:24:22.538317155Z" level=error msg="encountered an error cleaning up failed sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.538909 containerd[1480]: time="2026-04-13T19:24:22.538750831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd4c54cd5-vb8pw,Uid:c4e502e5-dd94-49d8-bd2f-926114e20b59,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.540591 kubelet[2577]: E0413 19:24:22.539152 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.540591 kubelet[2577]: E0413 19:24:22.539216 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.540591 kubelet[2577]: E0413 19:24:22.539235 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cd4c54cd5-vb8pw" Apr 13 19:24:22.540946 kubelet[2577]: E0413 19:24:22.539285 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cd4c54cd5-vb8pw_calico-system(c4e502e5-dd94-49d8-bd2f-926114e20b59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cd4c54cd5-vb8pw_calico-system(c4e502e5-dd94-49d8-bd2f-926114e20b59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cd4c54cd5-vb8pw" podUID="c4e502e5-dd94-49d8-bd2f-926114e20b59" Apr 13 19:24:22.545199 containerd[1480]: time="2026-04-13T19:24:22.543797423Z" level=error msg="Failed to destroy network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.545832 containerd[1480]: time="2026-04-13T19:24:22.545619485Z" level=error msg="encountered an error cleaning up failed sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.545832 containerd[1480]: time="2026-04-13T19:24:22.545775324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ngg5x,Uid:69279b5d-27a1-4477-a9f1-aa1c68505c9c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.546538 kubelet[2577]: E0413 19:24:22.546236 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.546538 kubelet[2577]: E0413 19:24:22.546375 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ngg5x" Apr 13 19:24:22.546538 kubelet[2577]: E0413 19:24:22.546393 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ngg5x" Apr 13 19:24:22.546737 kubelet[2577]: E0413 19:24:22.546484 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ngg5x_kube-system(69279b5d-27a1-4477-a9f1-aa1c68505c9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ngg5x_kube-system(69279b5d-27a1-4477-a9f1-aa1c68505c9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ngg5x" podUID="69279b5d-27a1-4477-a9f1-aa1c68505c9c" Apr 13 19:24:22.597975 containerd[1480]: time="2026-04-13T19:24:22.597765547Z" level=error msg="Failed to destroy network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.598443 containerd[1480]: time="2026-04-13T19:24:22.598286702Z" level=error msg="encountered an error cleaning up failed sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.598443 containerd[1480]: time="2026-04-13T19:24:22.598362701Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-224z2,Uid:46a789ab-50b2-4393-9d38-e8e911daa169,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.599010 kubelet[2577]: E0413 19:24:22.598878 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.599010 kubelet[2577]: E0413 19:24:22.598942 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.599010 kubelet[2577]: E0413 19:24:22.598961 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-224z2" Apr 13 19:24:22.601254 kubelet[2577]: E0413 19:24:22.599292 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-9f7667bb8-224z2_calico-system(46a789ab-50b2-4393-9d38-e8e911daa169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-224z2_calico-system(46a789ab-50b2-4393-9d38-e8e911daa169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-224z2" podUID="46a789ab-50b2-4393-9d38-e8e911daa169" Apr 13 19:24:22.620431 containerd[1480]: time="2026-04-13T19:24:22.620364611Z" level=error msg="Failed to destroy network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.620768 containerd[1480]: time="2026-04-13T19:24:22.620735648Z" level=error msg="encountered an error cleaning up failed sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.620830 containerd[1480]: time="2026-04-13T19:24:22.620792047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-vmpn9,Uid:a434ed07-82db-484c-93a0-fd9b39b96e37,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.621110 kubelet[2577]: E0413 19:24:22.621064 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.621179 kubelet[2577]: E0413 19:24:22.621131 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dcfc9864-vmpn9" Apr 13 19:24:22.621179 kubelet[2577]: E0413 19:24:22.621150 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dcfc9864-vmpn9" Apr 13 19:24:22.621244 kubelet[2577]: E0413 19:24:22.621206 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dcfc9864-vmpn9_calico-system(a434ed07-82db-484c-93a0-fd9b39b96e37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dcfc9864-vmpn9_calico-system(a434ed07-82db-484c-93a0-fd9b39b96e37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dcfc9864-vmpn9" podUID="a434ed07-82db-484c-93a0-fd9b39b96e37" Apr 13 19:24:22.640236 containerd[1480]: time="2026-04-13T19:24:22.640023103Z" level=error msg="Failed to destroy network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.642041 containerd[1480]: time="2026-04-13T19:24:22.641957645Z" level=error msg="encountered an error cleaning up failed sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.643357 containerd[1480]: time="2026-04-13T19:24:22.642232642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6784dd66-rscc9,Uid:6839a9ef-85b8-4ddf-8641-0352b0c9dd4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.643519 kubelet[2577]: E0413 19:24:22.642440 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.643519 kubelet[2577]: E0413 19:24:22.642492 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" Apr 13 19:24:22.643519 kubelet[2577]: E0413 19:24:22.642510 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" Apr 13 19:24:22.643643 kubelet[2577]: E0413 19:24:22.642558 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d6784dd66-rscc9_calico-system(6839a9ef-85b8-4ddf-8641-0352b0c9dd4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d6784dd66-rscc9_calico-system(6839a9ef-85b8-4ddf-8641-0352b0c9dd4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" podUID="6839a9ef-85b8-4ddf-8641-0352b0c9dd4c" Apr 13 19:24:22.652354 containerd[1480]: time="2026-04-13T19:24:22.651669992Z" level=error msg="Failed to destroy network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.654996 containerd[1480]: time="2026-04-13T19:24:22.654812882Z" level=error msg="encountered an error cleaning up failed sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.654996 containerd[1480]: time="2026-04-13T19:24:22.654899121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7f65,Uid:a4204cf2-a1a4-42c4-99b7-217ce68ed464,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.656559 kubelet[2577]: E0413 19:24:22.655557 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.656559 kubelet[2577]: E0413 
19:24:22.655635 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c7f65" Apr 13 19:24:22.656559 kubelet[2577]: E0413 19:24:22.655654 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c7f65" Apr 13 19:24:22.656760 kubelet[2577]: E0413 19:24:22.655738 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-c7f65_kube-system(a4204cf2-a1a4-42c4-99b7-217ce68ed464)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-c7f65_kube-system(a4204cf2-a1a4-42c4-99b7-217ce68ed464)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c7f65" podUID="a4204cf2-a1a4-42c4-99b7-217ce68ed464" Apr 13 19:24:22.659982 kubelet[2577]: I0413 19:24:22.659114 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:22.661924 containerd[1480]: 
time="2026-04-13T19:24:22.661880214Z" level=info msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" Apr 13 19:24:22.665431 containerd[1480]: time="2026-04-13T19:24:22.664500989Z" level=info msg="Ensure that sandbox a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83 in task-service has been cleanup successfully" Apr 13 19:24:22.666928 kubelet[2577]: I0413 19:24:22.665925 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:22.667073 containerd[1480]: time="2026-04-13T19:24:22.666519370Z" level=info msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" Apr 13 19:24:22.667073 containerd[1480]: time="2026-04-13T19:24:22.666720608Z" level=info msg="Ensure that sandbox 561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77 in task-service has been cleanup successfully" Apr 13 19:24:22.675619 kubelet[2577]: I0413 19:24:22.674912 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:22.678244 containerd[1480]: time="2026-04-13T19:24:22.678193899Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 19:24:22.680446 containerd[1480]: time="2026-04-13T19:24:22.678675614Z" level=info msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" Apr 13 19:24:22.680446 containerd[1480]: time="2026-04-13T19:24:22.679021011Z" level=info msg="Ensure that sandbox 8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64 in task-service has been cleanup successfully" Apr 13 19:24:22.683428 kubelet[2577]: I0413 19:24:22.683157 2577 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:22.685035 containerd[1480]: time="2026-04-13T19:24:22.684816075Z" level=info msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" Apr 13 19:24:22.686475 containerd[1480]: time="2026-04-13T19:24:22.685973344Z" level=info msg="Ensure that sandbox 7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98 in task-service has been cleanup successfully" Apr 13 19:24:22.701238 containerd[1480]: time="2026-04-13T19:24:22.700505205Z" level=error msg="Failed to destroy network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.702686 containerd[1480]: time="2026-04-13T19:24:22.702637985Z" level=error msg="encountered an error cleaning up failed sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.704557 containerd[1480]: time="2026-04-13T19:24:22.704503087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-8nhqr,Uid:b2a4adae-1965-4b2d-8d19-84656dfbab7d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.705026 kubelet[2577]: E0413 19:24:22.704986 2577 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.705257 kubelet[2577]: E0413 19:24:22.705090 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dcfc9864-8nhqr" Apr 13 19:24:22.705257 kubelet[2577]: E0413 19:24:22.705221 2577 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dcfc9864-8nhqr" Apr 13 19:24:22.706719 kubelet[2577]: E0413 19:24:22.705319 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dcfc9864-8nhqr_calico-system(b2a4adae-1965-4b2d-8d19-84656dfbab7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dcfc9864-8nhqr_calico-system(b2a4adae-1965-4b2d-8d19-84656dfbab7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dcfc9864-8nhqr" podUID="b2a4adae-1965-4b2d-8d19-84656dfbab7d" Apr 13 19:24:22.717723 containerd[1480]: time="2026-04-13T19:24:22.717677241Z" level=info msg="CreateContainer within sandbox \"17630215ad2191f36d7aa2b017754a09e69113bca47cf0c2d821255fea931f23\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c\"" Apr 13 19:24:22.722025 containerd[1480]: time="2026-04-13T19:24:22.721372406Z" level=info msg="StartContainer for \"341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c\"" Apr 13 19:24:22.744099 containerd[1480]: time="2026-04-13T19:24:22.744050829Z" level=error msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" failed" error="failed to destroy network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.744786 kubelet[2577]: E0413 19:24:22.744729 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:22.744887 kubelet[2577]: E0413 19:24:22.744806 2577 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77"} Apr 13 19:24:22.744887 kubelet[2577]: E0413 19:24:22.744868 2577 
kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69279b5d-27a1-4477-a9f1-aa1c68505c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:24:22.744998 kubelet[2577]: E0413 19:24:22.744902 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69279b5d-27a1-4477-a9f1-aa1c68505c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ngg5x" podUID="69279b5d-27a1-4477-a9f1-aa1c68505c9c" Apr 13 19:24:22.774005 containerd[1480]: time="2026-04-13T19:24:22.773943984Z" level=error msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" failed" error="failed to destroy network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.774232 kubelet[2577]: E0413 19:24:22.774191 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:22.774280 kubelet[2577]: E0413 19:24:22.774246 2577 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83"} Apr 13 19:24:22.774314 kubelet[2577]: E0413 19:24:22.774277 2577 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46a789ab-50b2-4393-9d38-e8e911daa169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:24:22.774366 kubelet[2577]: E0413 19:24:22.774307 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46a789ab-50b2-4393-9d38-e8e911daa169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-224z2" podUID="46a789ab-50b2-4393-9d38-e8e911daa169" Apr 13 19:24:22.787799 containerd[1480]: time="2026-04-13T19:24:22.787710932Z" level=error msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" failed" error="failed to destroy network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.788399 kubelet[2577]: E0413 19:24:22.788336 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:22.789210 kubelet[2577]: E0413 19:24:22.789167 2577 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98"} Apr 13 19:24:22.789556 kubelet[2577]: E0413 19:24:22.789527 2577 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:24:22.790336 kubelet[2577]: E0413 19:24:22.790294 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cd4c54cd5-vb8pw" 
podUID="c4e502e5-dd94-49d8-bd2f-926114e20b59" Apr 13 19:24:22.796676 systemd[1]: Started cri-containerd-341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c.scope - libcontainer container 341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c. Apr 13 19:24:22.798986 containerd[1480]: time="2026-04-13T19:24:22.798908305Z" level=error msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" failed" error="failed to destroy network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:24:22.799483 kubelet[2577]: E0413 19:24:22.799228 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:22.799483 kubelet[2577]: E0413 19:24:22.799296 2577 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64"} Apr 13 19:24:22.799483 kubelet[2577]: E0413 19:24:22.799330 2577 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a434ed07-82db-484c-93a0-fd9b39b96e37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:24:22.799483 kubelet[2577]: E0413 19:24:22.799374 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a434ed07-82db-484c-93a0-fd9b39b96e37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dcfc9864-vmpn9" podUID="a434ed07-82db-484c-93a0-fd9b39b96e37" Apr 13 19:24:22.842631 containerd[1480]: time="2026-04-13T19:24:22.842531488Z" level=info msg="StartContainer for \"341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c\" returns successfully" Apr 13 19:24:23.478475 systemd[1]: Created slice kubepods-besteffort-podb7e65d94_d910_4098_accc_faa807a02ba1.slice - libcontainer container kubepods-besteffort-podb7e65d94_d910_4098_accc_faa807a02ba1.slice. 
Apr 13 19:24:23.483804 containerd[1480]: time="2026-04-13T19:24:23.483721570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2r6n,Uid:b7e65d94-d910-4098-accc-faa807a02ba1,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:23.669872 systemd-networkd[1371]: calic0ab767e582: Link UP Apr 13 19:24:23.672316 systemd-networkd[1371]: calic0ab767e582: Gained carrier Apr 13 19:24:23.690355 kubelet[2577]: I0413 19:24:23.687981 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:23.690748 containerd[1480]: time="2026-04-13T19:24:23.688731973Z" level=info msg="StopPodSandbox for \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\"" Apr 13 19:24:23.690748 containerd[1480]: time="2026-04-13T19:24:23.688912532Z" level=info msg="Ensure that sandbox 7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6 in task-service has been cleanup successfully" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.522 [ERROR][3707] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.547 [INFO][3707] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0 csi-node-driver- calico-system b7e65d94-d910-4098-accc-faa807a02ba1 700 0 2026-04-13 19:24:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a csi-node-driver-b2r6n eth0 csi-node-driver [] 
[] [kns.calico-system ksa.calico-system.csi-node-driver] calic0ab767e582 [] [] }} ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.547 [INFO][3707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.597 [INFO][3719] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" HandleID="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Workload="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.611 [INFO][3719] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" HandleID="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Workload="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"csi-node-driver-b2r6n", "timestamp":"2026-04-13 19:24:23.597978146 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 
19:24:23.611 [INFO][3719] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.611 [INFO][3719] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.611 [INFO][3719] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.615 [INFO][3719] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.621 [INFO][3719] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.627 [INFO][3719] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.630 [INFO][3719] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.633 [INFO][3719] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.633 [INFO][3719] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.635 [INFO][3719] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1 Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.641 [INFO][3719] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 
handle="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.649 [INFO][3719] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 handle="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.649 [INFO][3719] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] handle="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.649 [INFO][3719] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:23.711010 containerd[1480]: 2026-04-13 19:24:23.649 [INFO][3719] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" HandleID="k8s-pod-network.69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Workload="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.655 [INFO][3707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7e65d94-d910-4098-accc-faa807a02ba1", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"csi-node-driver-b2r6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0ab767e582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.655 [INFO][3707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.65/32] ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.656 [INFO][3707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0ab767e582 ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.672 [INFO][3707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.676 [INFO][3707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7e65d94-d910-4098-accc-faa807a02ba1", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1", Pod:"csi-node-driver-b2r6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0ab767e582", MAC:"f2:99:86:29:08:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:23.712064 containerd[1480]: 2026-04-13 19:24:23.694 [INFO][3707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1" Namespace="calico-system" Pod="csi-node-driver-b2r6n" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-csi--node--driver--b2r6n-eth0" Apr 13 19:24:23.716608 kubelet[2577]: I0413 19:24:23.715844 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:23.719534 containerd[1480]: time="2026-04-13T19:24:23.719381179Z" level=info msg="StopPodSandbox for \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\"" Apr 13 19:24:23.720060 containerd[1480]: time="2026-04-13T19:24:23.720028853Z" level=info msg="Ensure that sandbox 69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86 in task-service has been cleanup successfully" Apr 13 19:24:23.732120 kubelet[2577]: I0413 19:24:23.731905 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:23.736321 containerd[1480]: time="2026-04-13T19:24:23.736164829Z" level=info msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" Apr 13 19:24:23.736760 containerd[1480]: time="2026-04-13T19:24:23.736485226Z" level=info msg="StopPodSandbox for \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\"" Apr 13 19:24:23.736760 containerd[1480]: time="2026-04-13T19:24:23.736709664Z" level=info msg="Ensure that sandbox fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42 in task-service has been cleanup 
successfully" Apr 13 19:24:23.755826 kubelet[2577]: I0413 19:24:23.755757 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-mb6t5" podStartSLOduration=2.041819197 podStartE2EDuration="16.755741733s" podCreationTimestamp="2026-04-13 19:24:07 +0000 UTC" firstStartedPulling="2026-04-13 19:24:07.934380168 +0000 UTC m=+23.607863604" lastFinishedPulling="2026-04-13 19:24:22.648302704 +0000 UTC m=+38.321786140" observedRunningTime="2026-04-13 19:24:23.755239418 +0000 UTC m=+39.428722854" watchObservedRunningTime="2026-04-13 19:24:23.755741733 +0000 UTC m=+39.429225169" Apr 13 19:24:23.789435 containerd[1480]: time="2026-04-13T19:24:23.789270713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:23.789435 containerd[1480]: time="2026-04-13T19:24:23.789345632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:23.789435 containerd[1480]: time="2026-04-13T19:24:23.789361992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:23.789650 containerd[1480]: time="2026-04-13T19:24:23.789481711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:23.860685 systemd[1]: Started cri-containerd-69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1.scope - libcontainer container 69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1. 
Apr 13 19:24:23.943251 containerd[1480]: time="2026-04-13T19:24:23.943105415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2r6n,Uid:b7e65d94-d910-4098-accc-faa807a02ba1,Namespace:calico-system,Attempt:0,} returns sandbox id \"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1\"" Apr 13 19:24:23.949774 containerd[1480]: time="2026-04-13T19:24:23.949682836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.856 [INFO][3743] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.856 [INFO][3743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" iface="eth0" netns="/var/run/netns/cni-c15769ee-9cbd-eaa6-de63-e457230efb07" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.858 [INFO][3743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" iface="eth0" netns="/var/run/netns/cni-c15769ee-9cbd-eaa6-de63-e457230efb07" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.862 [INFO][3743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" iface="eth0" netns="/var/run/netns/cni-c15769ee-9cbd-eaa6-de63-e457230efb07" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.862 [INFO][3743] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.862 [INFO][3743] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.958 [INFO][3820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.959 [INFO][3820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.960 [INFO][3820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.994 [WARNING][3820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.994 [INFO][3820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:23.998 [INFO][3820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.008663 containerd[1480]: 2026-04-13 19:24:24.004 [INFO][3743] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:24.010455 containerd[1480]: time="2026-04-13T19:24:24.009997140Z" level=info msg="TearDown network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" successfully" Apr 13 19:24:24.010455 containerd[1480]: time="2026-04-13T19:24:24.010040460Z" level=info msg="StopPodSandbox for \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" returns successfully" Apr 13 19:24:24.014101 containerd[1480]: time="2026-04-13T19:24:24.014058626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7f65,Uid:a4204cf2-a1a4-42c4-99b7-217ce68ed464,Namespace:kube-system,Attempt:1,}" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.906 [INFO][3777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.907 [INFO][3777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" iface="eth0" netns="/var/run/netns/cni-c572dd29-c35d-130e-4e4c-4023d0052383" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.907 [INFO][3777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" iface="eth0" netns="/var/run/netns/cni-c572dd29-c35d-130e-4e4c-4023d0052383" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.909 [INFO][3777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" iface="eth0" netns="/var/run/netns/cni-c572dd29-c35d-130e-4e4c-4023d0052383" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.909 [INFO][3777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:23.909 [INFO][3777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.025 [INFO][3837] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.025 [INFO][3837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.025 [INFO][3837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.040 [WARNING][3837] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.040 [INFO][3837] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.045 [INFO][3837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.057932 containerd[1480]: 2026-04-13 19:24:24.053 [INFO][3777] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:24.057932 containerd[1480]: time="2026-04-13T19:24:24.055390719Z" level=info msg="TearDown network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" successfully" Apr 13 19:24:24.057932 containerd[1480]: time="2026-04-13T19:24:24.055449879Z" level=info msg="StopPodSandbox for \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" returns successfully" Apr 13 19:24:24.059382 containerd[1480]: time="2026-04-13T19:24:24.058998009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-8nhqr,Uid:b2a4adae-1965-4b2d-8d19-84656dfbab7d,Namespace:calico-system,Attempt:1,}" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" iface="eth0" netns="/var/run/netns/cni-59798f79-8c0e-3e22-5f68-e9c9165291ee" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" iface="eth0" netns="/var/run/netns/cni-59798f79-8c0e-3e22-5f68-e9c9165291ee" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" iface="eth0" netns="/var/run/netns/cni-59798f79-8c0e-3e22-5f68-e9c9165291ee" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:23.964 [INFO][3788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.049 [INFO][3855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.050 [INFO][3855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.050 [INFO][3855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.071 [WARNING][3855] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.071 [INFO][3855] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.075 [INFO][3855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.081680 containerd[1480]: 2026-04-13 19:24:24.079 [INFO][3788] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:24.084378 containerd[1480]: time="2026-04-13T19:24:24.083164446Z" level=info msg="TearDown network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" successfully" Apr 13 19:24:24.084378 containerd[1480]: time="2026-04-13T19:24:24.083198486Z" level=info msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" returns successfully" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.937 [INFO][3785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.939 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" iface="eth0" netns="/var/run/netns/cni-f7b51d4f-6833-f752-9f61-23b228033cd8" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.940 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" iface="eth0" netns="/var/run/netns/cni-f7b51d4f-6833-f752-9f61-23b228033cd8" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.954 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" iface="eth0" netns="/var/run/netns/cni-f7b51d4f-6833-f752-9f61-23b228033cd8" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.954 [INFO][3785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:23.954 [INFO][3785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.042 [INFO][3850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.048 [INFO][3850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.076 [INFO][3850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.094 [WARNING][3850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.094 [INFO][3850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.097 [INFO][3850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.103026 containerd[1480]: 2026-04-13 19:24:24.099 [INFO][3785] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:24.105167 containerd[1480]: time="2026-04-13T19:24:24.104318308Z" level=info msg="TearDown network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" successfully" Apr 13 19:24:24.105167 containerd[1480]: time="2026-04-13T19:24:24.104508107Z" level=info msg="StopPodSandbox for \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" returns successfully" Apr 13 19:24:24.108895 containerd[1480]: time="2026-04-13T19:24:24.108857150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6784dd66-rscc9,Uid:6839a9ef-85b8-4ddf-8641-0352b0c9dd4c,Namespace:calico-system,Attempt:1,}" Apr 13 19:24:24.160249 kubelet[2577]: I0413 19:24:24.160210 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4e502e5-dd94-49d8-bd2f-926114e20b59-kube-api-access-hnjzl\" (UniqueName: \"kubernetes.io/projected/c4e502e5-dd94-49d8-bd2f-926114e20b59-kube-api-access-hnjzl\") pod \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " Apr 13 19:24:24.162755 kubelet[2577]: I0413 19:24:24.161405 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-backend-key-pair\") pod \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " Apr 13 19:24:24.162755 kubelet[2577]: I0413 19:24:24.162154 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-ca-bundle\") pod \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" (UID: 
\"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " Apr 13 19:24:24.162755 kubelet[2577]: I0413 19:24:24.162179 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-nginx-config\") pod \"c4e502e5-dd94-49d8-bd2f-926114e20b59\" (UID: \"c4e502e5-dd94-49d8-bd2f-926114e20b59\") " Apr 13 19:24:24.162755 kubelet[2577]: I0413 19:24:24.162576 2577 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-nginx-config" pod "c4e502e5-dd94-49d8-bd2f-926114e20b59" (UID: "c4e502e5-dd94-49d8-bd2f-926114e20b59"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:24:24.163499 kubelet[2577]: I0413 19:24:24.163385 2577 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-ca-bundle" pod "c4e502e5-dd94-49d8-bd2f-926114e20b59" (UID: "c4e502e5-dd94-49d8-bd2f-926114e20b59"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:24:24.166949 kubelet[2577]: I0413 19:24:24.166877 2577 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e502e5-dd94-49d8-bd2f-926114e20b59-kube-api-access-hnjzl" pod "c4e502e5-dd94-49d8-bd2f-926114e20b59" (UID: "c4e502e5-dd94-49d8-bd2f-926114e20b59"). InnerVolumeSpecName "kube-api-access-hnjzl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:24:24.170870 kubelet[2577]: I0413 19:24:24.170790 2577 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-backend-key-pair" pod "c4e502e5-dd94-49d8-bd2f-926114e20b59" (UID: "c4e502e5-dd94-49d8-bd2f-926114e20b59"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:24:24.269507 kubelet[2577]: I0413 19:24:24.266275 2577 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-ca-bundle\") on node \"ci-4081-3-7-e-ee64700b2a\" DevicePath \"\"" Apr 13 19:24:24.269507 kubelet[2577]: I0413 19:24:24.266312 2577 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4e502e5-dd94-49d8-bd2f-926114e20b59-nginx-config\") on node \"ci-4081-3-7-e-ee64700b2a\" DevicePath \"\"" Apr 13 19:24:24.269507 kubelet[2577]: I0413 19:24:24.266322 2577 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnjzl\" (UniqueName: \"kubernetes.io/projected/c4e502e5-dd94-49d8-bd2f-926114e20b59-kube-api-access-hnjzl\") on node \"ci-4081-3-7-e-ee64700b2a\" DevicePath \"\"" Apr 13 19:24:24.269507 kubelet[2577]: I0413 19:24:24.266331 2577 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4e502e5-dd94-49d8-bd2f-926114e20b59-whisker-backend-key-pair\") on node \"ci-4081-3-7-e-ee64700b2a\" DevicePath \"\"" Apr 13 19:24:24.267922 systemd[1]: run-netns-cni\x2dc15769ee\x2d9cbd\x2deaa6\x2dde63\x2de457230efb07.mount: Deactivated successfully. Apr 13 19:24:24.268001 systemd[1]: run-netns-cni\x2dc572dd29\x2dc35d\x2d130e\x2d4e4c\x2d4023d0052383.mount: Deactivated successfully. Apr 13 19:24:24.268058 systemd[1]: run-netns-cni\x2df7b51d4f\x2d6833\x2df752\x2d9f61\x2d23b228033cd8.mount: Deactivated successfully. Apr 13 19:24:24.268115 systemd[1]: run-netns-cni\x2d59798f79\x2d8c0e\x2d3e22\x2d5f68\x2de9c9165291ee.mount: Deactivated successfully. 
Apr 13 19:24:24.268168 systemd[1]: var-lib-kubelet-pods-c4e502e5\x2ddd94\x2d49d8\x2dbd2f\x2d926114e20b59-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 19:24:24.268219 systemd[1]: var-lib-kubelet-pods-c4e502e5\x2ddd94\x2d49d8\x2dbd2f\x2d926114e20b59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnjzl.mount: Deactivated successfully. Apr 13 19:24:24.290689 systemd-networkd[1371]: cali18050ad82e5: Link UP Apr 13 19:24:24.290906 systemd-networkd[1371]: cali18050ad82e5: Gained carrier Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.090 [ERROR][3865] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.131 [INFO][3865] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0 coredns-7d764666f9- kube-system a4204cf2-a1a4-42c4-99b7-217ce68ed464 883 0 2026-04-13 19:23:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a coredns-7d764666f9-c7f65 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18050ad82e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.131 [INFO][3865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.185 [INFO][3895] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" HandleID="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.203 [INFO][3895] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" HandleID="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"coredns-7d764666f9-c7f65", "timestamp":"2026-04-13 19:24:24.185394907 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003e1080)} Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.203 [INFO][3895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.203 [INFO][3895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.203 [INFO][3895] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.209 [INFO][3895] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.218 [INFO][3895] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.226 [INFO][3895] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.230 [INFO][3895] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.233 [INFO][3895] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.233 [INFO][3895] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.236 [INFO][3895] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27 Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.243 [INFO][3895] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.265 [INFO][3895] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 handle="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.265 [INFO][3895] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.270 [INFO][3895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.308610 containerd[1480]: 2026-04-13 19:24:24.270 [INFO][3895] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" HandleID="k8s-pod-network.bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.309124 containerd[1480]: 2026-04-13 19:24:24.280 [INFO][3865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a4204cf2-a1a4-42c4-99b7-217ce68ed464", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"coredns-7d764666f9-c7f65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18050ad82e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.309124 containerd[1480]: 2026-04-13 19:24:24.280 [INFO][3865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.66/32] ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.309124 containerd[1480]: 2026-04-13 19:24:24.283 [INFO][3865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18050ad82e5 
ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.309124 containerd[1480]: 2026-04-13 19:24:24.287 [INFO][3865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.309124 containerd[1480]: 2026-04-13 19:24:24.287 [INFO][3865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a4204cf2-a1a4-42c4-99b7-217ce68ed464", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", 
ContainerID:"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27", Pod:"coredns-7d764666f9-c7f65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18050ad82e5", MAC:"66:af:35:82:e6:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.309349 containerd[1480]: 2026-04-13 19:24:24.302 [INFO][3865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27" Namespace="kube-system" Pod="coredns-7d764666f9-c7f65" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:24.362513 containerd[1480]: time="2026-04-13T19:24:24.362348421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:24.363665 containerd[1480]: time="2026-04-13T19:24:24.362537340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:24.363665 containerd[1480]: time="2026-04-13T19:24:24.362572100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.363665 containerd[1480]: time="2026-04-13T19:24:24.363539971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.439071 systemd[1]: Started cri-containerd-bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27.scope - libcontainer container bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27. Apr 13 19:24:24.440751 systemd-networkd[1371]: cali94161c4eaef: Link UP Apr 13 19:24:24.444852 systemd-networkd[1371]: cali94161c4eaef: Gained carrier Apr 13 19:24:24.492799 systemd[1]: Removed slice kubepods-besteffort-podc4e502e5_dd94_49d8_bd2f_926114e20b59.slice - libcontainer container kubepods-besteffort-podc4e502e5_dd94_49d8_bd2f_926114e20b59.slice. 
Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.150 [ERROR][3875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.179 [INFO][3875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0 calico-apiserver-dcfc9864- calico-system b2a4adae-1965-4b2d-8d19-84656dfbab7d 884 0 2026-04-13 19:24:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dcfc9864 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a calico-apiserver-dcfc9864-8nhqr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali94161c4eaef [] [] }} ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.179 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.241 [INFO][3909] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" HandleID="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" 
Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.282 [INFO][3909] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" HandleID="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"calico-apiserver-dcfc9864-8nhqr", "timestamp":"2026-04-13 19:24:24.241330358 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001826e0)} Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.283 [INFO][3909] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.283 [INFO][3909] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.283 [INFO][3909] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.312 [INFO][3909] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.330 [INFO][3909] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.341 [INFO][3909] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.346 [INFO][3909] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.354 [INFO][3909] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.354 [INFO][3909] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.358 [INFO][3909] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7 Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.376 [INFO][3909] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.388 [INFO][3909] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 handle="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.389 [INFO][3909] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.390 [INFO][3909] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:24.504892 containerd[1480]: 2026-04-13 19:24:24.390 [INFO][3909] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" HandleID="k8s-pod-network.f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.402 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"b2a4adae-1965-4b2d-8d19-84656dfbab7d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"calico-apiserver-dcfc9864-8nhqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94161c4eaef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.402 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.67/32] ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.402 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94161c4eaef ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.451 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" 
WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.458 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"b2a4adae-1965-4b2d-8d19-84656dfbab7d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7", Pod:"calico-apiserver-dcfc9864-8nhqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94161c4eaef", MAC:"7e:54:7a:18:72:8c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.506961 containerd[1480]: 2026-04-13 19:24:24.500 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-8nhqr" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:24.544771 systemd-networkd[1371]: calied041ec83bc: Link UP Apr 13 19:24:24.551120 systemd-networkd[1371]: calied041ec83bc: Gained carrier Apr 13 19:24:24.581935 containerd[1480]: time="2026-04-13T19:24:24.581507941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7f65,Uid:a4204cf2-a1a4-42c4-99b7-217ce68ed464,Namespace:kube-system,Attempt:1,} returns sandbox id \"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27\"" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.179 [ERROR][3886] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.200 [INFO][3886] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0 calico-kube-controllers-5d6784dd66- calico-system 6839a9ef-85b8-4ddf-8641-0352b0c9dd4c 885 0 2026-04-13 19:24:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d6784dd66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a calico-kube-controllers-5d6784dd66-rscc9 eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calied041ec83bc [] [] }} ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.201 [INFO][3886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.258 [INFO][3917] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" HandleID="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.284 [INFO][3917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" HandleID="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"calico-kube-controllers-5d6784dd66-rscc9", "timestamp":"2026-04-13 19:24:24.258718332 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e5760)} Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.292 [INFO][3917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.390 [INFO][3917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.390 [INFO][3917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.414 [INFO][3917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.446 [INFO][3917] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.468 [INFO][3917] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.481 [INFO][3917] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.486 [INFO][3917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.487 [INFO][3917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.502 [INFO][3917] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.511 [INFO][3917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.522 [INFO][3917] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.522 [INFO][3917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.523 [INFO][3917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:24:24.598197 containerd[1480]: 2026-04-13 19:24:24.523 [INFO][3917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" HandleID="k8s-pod-network.aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.529 [INFO][3886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0", GenerateName:"calico-kube-controllers-5d6784dd66-", Namespace:"calico-system", SelfLink:"", UID:"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6784dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"calico-kube-controllers-5d6784dd66-rscc9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied041ec83bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.529 [INFO][3886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.68/32] ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.529 [INFO][3886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied041ec83bc ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.551 [INFO][3886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.552 [INFO][3886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" Pod="calico-kube-controllers-5d6784dd66-rscc9" 
WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0", GenerateName:"calico-kube-controllers-5d6784dd66-", Namespace:"calico-system", SelfLink:"", UID:"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6784dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e", Pod:"calico-kube-controllers-5d6784dd66-rscc9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied041ec83bc", MAC:"ee:4e:f4:8d:11:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:24.598838 containerd[1480]: 2026-04-13 19:24:24.583 [INFO][3886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e" Namespace="calico-system" 
Pod="calico-kube-controllers-5d6784dd66-rscc9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:24.617947 containerd[1480]: time="2026-04-13T19:24:24.617845356Z" level=info msg="CreateContainer within sandbox \"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:24:24.618105 containerd[1480]: time="2026-04-13T19:24:24.594182754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:24.618105 containerd[1480]: time="2026-04-13T19:24:24.594247594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:24.618105 containerd[1480]: time="2026-04-13T19:24:24.594275794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.618105 containerd[1480]: time="2026-04-13T19:24:24.594393593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.667729 containerd[1480]: time="2026-04-13T19:24:24.667585738Z" level=info msg="CreateContainer within sandbox \"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee074823a146368a08733e9f928b5d0e519b419be75a894f47e33fc77a2088c1\"" Apr 13 19:24:24.668162 systemd[1]: Started cri-containerd-f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7.scope - libcontainer container f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7. 
Apr 13 19:24:24.671525 containerd[1480]: time="2026-04-13T19:24:24.671349986Z" level=info msg="StartContainer for \"ee074823a146368a08733e9f928b5d0e519b419be75a894f47e33fc77a2088c1\"" Apr 13 19:24:24.687568 containerd[1480]: time="2026-04-13T19:24:24.686955535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:24.687568 containerd[1480]: time="2026-04-13T19:24:24.687065574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:24.687568 containerd[1480]: time="2026-04-13T19:24:24.687080974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.687876 containerd[1480]: time="2026-04-13T19:24:24.687396052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:24.715763 systemd[1]: Started cri-containerd-aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e.scope - libcontainer container aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e. Apr 13 19:24:24.755262 kubelet[2577]: I0413 19:24:24.755198 2577 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:24.770779 systemd[1]: Started cri-containerd-ee074823a146368a08733e9f928b5d0e519b419be75a894f47e33fc77a2088c1.scope - libcontainer container ee074823a146368a08733e9f928b5d0e519b419be75a894f47e33fc77a2088c1. Apr 13 19:24:24.916310 systemd[1]: Created slice kubepods-besteffort-podf44c28c5_accd_4c53_a6ec_fbd772b4aca8.slice - libcontainer container kubepods-besteffort-podf44c28c5_accd_4c53_a6ec_fbd772b4aca8.slice. 
Apr 13 19:24:24.939923 containerd[1480]: time="2026-04-13T19:24:24.939755292Z" level=info msg="StartContainer for \"ee074823a146368a08733e9f928b5d0e519b419be75a894f47e33fc77a2088c1\" returns successfully" Apr 13 19:24:24.970891 kubelet[2577]: I0413 19:24:24.970377 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f44c28c5-accd-4c53-a6ec-fbd772b4aca8-whisker-backend-key-pair\") pod \"whisker-d6c4b6b4b-c72cn\" (UID: \"f44c28c5-accd-4c53-a6ec-fbd772b4aca8\") " pod="calico-system/whisker-d6c4b6b4b-c72cn" Apr 13 19:24:24.970891 kubelet[2577]: I0413 19:24:24.970438 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f44c28c5-accd-4c53-a6ec-fbd772b4aca8-whisker-ca-bundle\") pod \"whisker-d6c4b6b4b-c72cn\" (UID: \"f44c28c5-accd-4c53-a6ec-fbd772b4aca8\") " pod="calico-system/whisker-d6c4b6b4b-c72cn" Apr 13 19:24:24.970891 kubelet[2577]: I0413 19:24:24.970468 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f44c28c5-accd-4c53-a6ec-fbd772b4aca8-nginx-config\") pod \"whisker-d6c4b6b4b-c72cn\" (UID: \"f44c28c5-accd-4c53-a6ec-fbd772b4aca8\") " pod="calico-system/whisker-d6c4b6b4b-c72cn" Apr 13 19:24:24.970891 kubelet[2577]: I0413 19:24:24.970486 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczbk\" (UniqueName: \"kubernetes.io/projected/f44c28c5-accd-4c53-a6ec-fbd772b4aca8-kube-api-access-xczbk\") pod \"whisker-d6c4b6b4b-c72cn\" (UID: \"f44c28c5-accd-4c53-a6ec-fbd772b4aca8\") " pod="calico-system/whisker-d6c4b6b4b-c72cn" Apr 13 19:24:25.020214 containerd[1480]: time="2026-04-13T19:24:25.019971069Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dcfc9864-8nhqr,Uid:b2a4adae-1965-4b2d-8d19-84656dfbab7d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7\"" Apr 13 19:24:25.103390 containerd[1480]: time="2026-04-13T19:24:25.102521859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6784dd66-rscc9,Uid:6839a9ef-85b8-4ddf-8641-0352b0c9dd4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e\"" Apr 13 19:24:25.228813 containerd[1480]: time="2026-04-13T19:24:25.228688265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c4b6b4b-c72cn,Uid:f44c28c5-accd-4c53-a6ec-fbd772b4aca8,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:25.332629 systemd-networkd[1371]: calic0ab767e582: Gained IPv6LL Apr 13 19:24:25.491151 systemd-networkd[1371]: cali43b80f1d73e: Link UP Apr 13 19:24:25.491704 systemd-networkd[1371]: cali43b80f1d73e: Gained carrier Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.320 [ERROR][4203] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.345 [INFO][4203] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0 whisker-d6c4b6b4b- calico-system f44c28c5-accd-4c53-a6ec-fbd772b4aca8 915 0 2026-04-13 19:24:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d6c4b6b4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a whisker-d6c4b6b4b-c72cn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] 
cali43b80f1d73e [] [] }} ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.346 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.420 [INFO][4225] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" HandleID="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.434 [INFO][4225] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" HandleID="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fc170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"whisker-d6c4b6b4b-c72cn", "timestamp":"2026-04-13 19:24:25.420795553 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186580)} Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.434 [INFO][4225] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.434 [INFO][4225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.435 [INFO][4225] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.438 [INFO][4225] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.446 [INFO][4225] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.454 [INFO][4225] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.457 [INFO][4225] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.461 [INFO][4225] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.462 [INFO][4225] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.464 [INFO][4225] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5 Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.473 [INFO][4225] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 
handle="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.483 [INFO][4225] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.483 [INFO][4225] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.483 [INFO][4225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:25.512904 containerd[1480]: 2026-04-13 19:24:25.484 [INFO][4225] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" HandleID="k8s-pod-network.9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.488 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0", GenerateName:"whisker-d6c4b6b4b-", Namespace:"calico-system", SelfLink:"", UID:"f44c28c5-accd-4c53-a6ec-fbd772b4aca8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 24, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d6c4b6b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"whisker-d6c4b6b4b-c72cn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali43b80f1d73e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.488 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.69/32] ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.488 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43b80f1d73e ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.492 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" 
Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.493 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0", GenerateName:"whisker-d6c4b6b4b-", Namespace:"calico-system", SelfLink:"", UID:"f44c28c5-accd-4c53-a6ec-fbd772b4aca8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d6c4b6b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5", Pod:"whisker-d6c4b6b4b-c72cn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali43b80f1d73e", MAC:"02:4d:6a:ff:d6:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 
13 19:24:25.515361 containerd[1480]: 2026-04-13 19:24:25.506 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5" Namespace="calico-system" Pod="whisker-d6c4b6b4b-c72cn" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--d6c4b6b4b--c72cn-eth0" Apr 13 19:24:25.548523 containerd[1480]: time="2026-04-13T19:24:25.547176678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:25.548523 containerd[1480]: time="2026-04-13T19:24:25.548506667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:25.548791 containerd[1480]: time="2026-04-13T19:24:25.548573787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:25.548791 containerd[1480]: time="2026-04-13T19:24:25.548715666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:25.555003 kernel: calico-node[4036]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:24:25.585695 systemd[1]: Started cri-containerd-9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5.scope - libcontainer container 9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5. 
Apr 13 19:24:25.668507 containerd[1480]: time="2026-04-13T19:24:25.666756216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c4b6b4b-c72cn,Uid:f44c28c5-accd-4c53-a6ec-fbd772b4aca8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5\"" Apr 13 19:24:25.792479 kubelet[2577]: I0413 19:24:25.788279 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c7f65" podStartSLOduration=35.78824138 podStartE2EDuration="35.78824138s" podCreationTimestamp="2026-04-13 19:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:25.787775464 +0000 UTC m=+41.461258940" watchObservedRunningTime="2026-04-13 19:24:25.78824138 +0000 UTC m=+41.461724816" Apr 13 19:24:26.077036 systemd-networkd[1371]: vxlan.calico: Link UP Apr 13 19:24:26.077045 systemd-networkd[1371]: vxlan.calico: Gained carrier Apr 13 19:24:26.164643 systemd-networkd[1371]: cali18050ad82e5: Gained IPv6LL Apr 13 19:24:26.228729 systemd-networkd[1371]: calied041ec83bc: Gained IPv6LL Apr 13 19:24:26.476601 kubelet[2577]: I0413 19:24:26.475939 2577 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4e502e5-dd94-49d8-bd2f-926114e20b59" path="/var/lib/kubelet/pods/c4e502e5-dd94-49d8-bd2f-926114e20b59/volumes" Apr 13 19:24:26.484850 systemd-networkd[1371]: cali94161c4eaef: Gained IPv6LL Apr 13 19:24:26.952372 containerd[1480]: time="2026-04-13T19:24:26.952323403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:26.954622 containerd[1480]: time="2026-04-13T19:24:26.954498347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:24:26.955989 containerd[1480]: time="2026-04-13T19:24:26.955726898Z" 
level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:26.960258 containerd[1480]: time="2026-04-13T19:24:26.960199585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:26.961174 containerd[1480]: time="2026-04-13T19:24:26.961119658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.011384863s" Apr 13 19:24:26.961174 containerd[1480]: time="2026-04-13T19:24:26.961170538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:24:26.963298 containerd[1480]: time="2026-04-13T19:24:26.963260602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:24:26.968494 containerd[1480]: time="2026-04-13T19:24:26.968451204Z" level=info msg="CreateContainer within sandbox \"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:24:27.017704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662254437.mount: Deactivated successfully. 
Apr 13 19:24:27.021826 containerd[1480]: time="2026-04-13T19:24:27.021775260Z" level=info msg="CreateContainer within sandbox \"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"79ec6f5e5840e5aa1f20aa03eaccbb01952d9b6de5a4d83363f2b8c916436ba5\"" Apr 13 19:24:27.023720 containerd[1480]: time="2026-04-13T19:24:27.023667847Z" level=info msg="StartContainer for \"79ec6f5e5840e5aa1f20aa03eaccbb01952d9b6de5a4d83363f2b8c916436ba5\"" Apr 13 19:24:27.066818 systemd[1]: Started cri-containerd-79ec6f5e5840e5aa1f20aa03eaccbb01952d9b6de5a4d83363f2b8c916436ba5.scope - libcontainer container 79ec6f5e5840e5aa1f20aa03eaccbb01952d9b6de5a4d83363f2b8c916436ba5. Apr 13 19:24:27.101333 containerd[1480]: time="2026-04-13T19:24:27.101251630Z" level=info msg="StartContainer for \"79ec6f5e5840e5aa1f20aa03eaccbb01952d9b6de5a4d83363f2b8c916436ba5\" returns successfully" Apr 13 19:24:27.188796 systemd-networkd[1371]: cali43b80f1d73e: Gained IPv6LL Apr 13 19:24:28.023688 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Apr 13 19:24:29.300919 containerd[1480]: time="2026-04-13T19:24:29.300753294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:29.302623 containerd[1480]: time="2026-04-13T19:24:29.302568683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:24:29.303964 containerd[1480]: time="2026-04-13T19:24:29.303891154Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:29.307561 containerd[1480]: time="2026-04-13T19:24:29.307505732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:29.309020 containerd[1480]: time="2026-04-13T19:24:29.308820045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.345511923s" Apr 13 19:24:29.309020 containerd[1480]: time="2026-04-13T19:24:29.308866044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:24:29.311240 containerd[1480]: time="2026-04-13T19:24:29.310736633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:24:29.316470 containerd[1480]: time="2026-04-13T19:24:29.316317479Z" level=info msg="CreateContainer within sandbox \"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:24:29.334695 containerd[1480]: time="2026-04-13T19:24:29.334637527Z" level=info msg="CreateContainer within sandbox \"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ec5d93cbfdf0e7781da120223badcd1db64b0d5cd679da5dda719cf92dd75a9\"" Apr 13 19:24:29.338441 containerd[1480]: time="2026-04-13T19:24:29.336456916Z" level=info msg="StartContainer for \"5ec5d93cbfdf0e7781da120223badcd1db64b0d5cd679da5dda719cf92dd75a9\"" Apr 13 19:24:29.387857 systemd[1]: Started cri-containerd-5ec5d93cbfdf0e7781da120223badcd1db64b0d5cd679da5dda719cf92dd75a9.scope - libcontainer container 5ec5d93cbfdf0e7781da120223badcd1db64b0d5cd679da5dda719cf92dd75a9. 
Apr 13 19:24:29.429372 containerd[1480]: time="2026-04-13T19:24:29.429298672Z" level=info msg="StartContainer for \"5ec5d93cbfdf0e7781da120223badcd1db64b0d5cd679da5dda719cf92dd75a9\" returns successfully" Apr 13 19:24:30.792560 kubelet[2577]: I0413 19:24:30.792100 2577 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:31.722202 kubelet[2577]: I0413 19:24:31.721683 2577 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:31.844588 systemd[1]: run-containerd-runc-k8s.io-341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c-runc.n53T8y.mount: Deactivated successfully. Apr 13 19:24:31.860685 kubelet[2577]: I0413 19:24:31.858075 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-dcfc9864-8nhqr" podStartSLOduration=22.572266709 podStartE2EDuration="26.858047512s" podCreationTimestamp="2026-04-13 19:24:05 +0000 UTC" firstStartedPulling="2026-04-13 19:24:25.024341674 +0000 UTC m=+40.697825110" lastFinishedPulling="2026-04-13 19:24:29.310122477 +0000 UTC m=+44.983605913" observedRunningTime="2026-04-13 19:24:29.81106003 +0000 UTC m=+45.484543466" watchObservedRunningTime="2026-04-13 19:24:31.858047512 +0000 UTC m=+47.531530908" Apr 13 19:24:32.552681 containerd[1480]: time="2026-04-13T19:24:32.552621863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.554110 containerd[1480]: time="2026-04-13T19:24:32.554045256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:24:32.555772 containerd[1480]: time="2026-04-13T19:24:32.555664128Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.560957 
containerd[1480]: time="2026-04-13T19:24:32.560550904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.561834 containerd[1480]: time="2026-04-13T19:24:32.561779737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.250991944s" Apr 13 19:24:32.561953 containerd[1480]: time="2026-04-13T19:24:32.561915057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:24:32.565489 containerd[1480]: time="2026-04-13T19:24:32.564090486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:24:32.583272 containerd[1480]: time="2026-04-13T19:24:32.583235910Z" level=info msg="CreateContainer within sandbox \"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:24:32.604002 containerd[1480]: time="2026-04-13T19:24:32.603952886Z" level=info msg="CreateContainer within sandbox \"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d9dc7308502c2d327702bf9093b1fe1949394ef9e9059166ea47b490184f44db\"" Apr 13 19:24:32.605361 containerd[1480]: time="2026-04-13T19:24:32.605314359Z" level=info msg="StartContainer for \"d9dc7308502c2d327702bf9093b1fe1949394ef9e9059166ea47b490184f44db\"" Apr 13 19:24:32.636687 systemd[1]: 
Started cri-containerd-d9dc7308502c2d327702bf9093b1fe1949394ef9e9059166ea47b490184f44db.scope - libcontainer container d9dc7308502c2d327702bf9093b1fe1949394ef9e9059166ea47b490184f44db. Apr 13 19:24:32.727449 containerd[1480]: time="2026-04-13T19:24:32.726216473Z" level=info msg="StartContainer for \"d9dc7308502c2d327702bf9093b1fe1949394ef9e9059166ea47b490184f44db\" returns successfully" Apr 13 19:24:33.865964 kubelet[2577]: I0413 19:24:33.865879 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d6784dd66-rscc9" podStartSLOduration=19.416212873 podStartE2EDuration="26.865862953s" podCreationTimestamp="2026-04-13 19:24:07 +0000 UTC" firstStartedPulling="2026-04-13 19:24:25.114151967 +0000 UTC m=+40.787635363" lastFinishedPulling="2026-04-13 19:24:32.563802007 +0000 UTC m=+48.237285443" observedRunningTime="2026-04-13 19:24:32.828237442 +0000 UTC m=+48.501720838" watchObservedRunningTime="2026-04-13 19:24:33.865862953 +0000 UTC m=+49.539346389" Apr 13 19:24:34.277108 containerd[1480]: time="2026-04-13T19:24:34.276959383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.279404 containerd[1480]: time="2026-04-13T19:24:34.279336492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:24:34.281086 containerd[1480]: time="2026-04-13T19:24:34.281021405Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.284348 containerd[1480]: time="2026-04-13T19:24:34.284288791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.285983 
containerd[1480]: time="2026-04-13T19:24:34.285798104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.721665778s" Apr 13 19:24:34.285983 containerd[1480]: time="2026-04-13T19:24:34.285849744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:24:34.288415 containerd[1480]: time="2026-04-13T19:24:34.288356933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:24:34.291840 containerd[1480]: time="2026-04-13T19:24:34.291789157Z" level=info msg="CreateContainer within sandbox \"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:24:34.324274 containerd[1480]: time="2026-04-13T19:24:34.324073615Z" level=info msg="CreateContainer within sandbox \"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"10ac02746e81705bec2041f91d0578b82cc5de85351d2881430678bd7582cc18\"" Apr 13 19:24:34.332819 containerd[1480]: time="2026-04-13T19:24:34.332763497Z" level=info msg="StartContainer for \"10ac02746e81705bec2041f91d0578b82cc5de85351d2881430678bd7582cc18\"" Apr 13 19:24:34.346154 kubelet[2577]: I0413 19:24:34.346099 2577 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:34.383666 systemd[1]: Started cri-containerd-10ac02746e81705bec2041f91d0578b82cc5de85351d2881430678bd7582cc18.scope - libcontainer container 10ac02746e81705bec2041f91d0578b82cc5de85351d2881430678bd7582cc18. 
Apr 13 19:24:34.478734 containerd[1480]: time="2026-04-13T19:24:34.478681374Z" level=info msg="StartContainer for \"10ac02746e81705bec2041f91d0578b82cc5de85351d2881430678bd7582cc18\" returns successfully" Apr 13 19:24:35.471879 containerd[1480]: time="2026-04-13T19:24:35.471399651Z" level=info msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" Apr 13 19:24:35.471879 containerd[1480]: time="2026-04-13T19:24:35.471614131Z" level=info msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" iface="eth0" netns="/var/run/netns/cni-c121a376-f904-2b8d-b522-24b609ff7bd9" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" iface="eth0" netns="/var/run/netns/cni-c121a376-f904-2b8d-b522-24b609ff7bd9" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" iface="eth0" netns="/var/run/netns/cni-c121a376-f904-2b8d-b522-24b609ff7bd9" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.611 [INFO][4720] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.655 [INFO][4737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.657 [INFO][4737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.657 [INFO][4737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.675 [WARNING][4737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.675 [INFO][4737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.678 [INFO][4737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:35.686338 containerd[1480]: 2026-04-13 19:24:35.683 [INFO][4720] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:35.690686 containerd[1480]: time="2026-04-13T19:24:35.688957713Z" level=info msg="TearDown network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" successfully" Apr 13 19:24:35.690373 systemd[1]: run-netns-cni\x2dc121a376\x2df904\x2d2b8d\x2db522\x2d24b609ff7bd9.mount: Deactivated successfully. 
Apr 13 19:24:35.691563 containerd[1480]: time="2026-04-13T19:24:35.689002713Z" level=info msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" returns successfully" Apr 13 19:24:35.712541 containerd[1480]: time="2026-04-13T19:24:35.712242217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-224z2,Uid:46a789ab-50b2-4393-9d38-e8e911daa169,Namespace:calico-system,Attempt:1,}" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.594 [INFO][4719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.594 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" iface="eth0" netns="/var/run/netns/cni-0ad448b4-22ae-05c7-214e-cfe0d54201ae" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.595 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" iface="eth0" netns="/var/run/netns/cni-0ad448b4-22ae-05c7-214e-cfe0d54201ae" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.596 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" iface="eth0" netns="/var/run/netns/cni-0ad448b4-22ae-05c7-214e-cfe0d54201ae" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.596 [INFO][4719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.596 [INFO][4719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.671 [INFO][4732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.672 [INFO][4732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.678 [INFO][4732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.697 [WARNING][4732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.697 [INFO][4732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.702 [INFO][4732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:35.712928 containerd[1480]: 2026-04-13 19:24:35.709 [INFO][4719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:35.719621 containerd[1480]: time="2026-04-13T19:24:35.716605959Z" level=info msg="TearDown network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" successfully" Apr 13 19:24:35.719621 containerd[1480]: time="2026-04-13T19:24:35.716649359Z" level=info msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" returns successfully" Apr 13 19:24:35.718517 systemd[1]: run-netns-cni\x2d0ad448b4\x2d22ae\x2d05c7\x2d214e\x2dcfe0d54201ae.mount: Deactivated successfully. 
Apr 13 19:24:35.722577 containerd[1480]: time="2026-04-13T19:24:35.722397855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ngg5x,Uid:69279b5d-27a1-4477-a9f1-aa1c68505c9c,Namespace:kube-system,Attempt:1,}" Apr 13 19:24:36.031810 systemd-networkd[1371]: caliacc1d1c96a0: Link UP Apr 13 19:24:36.034342 systemd-networkd[1371]: caliacc1d1c96a0: Gained carrier Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.833 [INFO][4746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0 goldmane-9f7667bb8- calico-system 46a789ab-50b2-4393-9d38-e8e911daa169 1001 0 2026-04-13 19:24:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a goldmane-9f7667bb8-224z2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliacc1d1c96a0 [] [] }} ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.833 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.891 [INFO][4772] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" 
HandleID="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.910 [INFO][4772] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" HandleID="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"goldmane-9f7667bb8-224z2", "timestamp":"2026-04-13 19:24:35.891255238 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400055cf20)} Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.910 [INFO][4772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.910 [INFO][4772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.911 [INFO][4772] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.917 [INFO][4772] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.929 [INFO][4772] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.942 [INFO][4772] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.947 [INFO][4772] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.954 [INFO][4772] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.954 [INFO][4772] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.957 [INFO][4772] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8 Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.970 [INFO][4772] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4772] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.126.70/26] block=192.168.126.64/26 handle="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4772] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.70/26] handle="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:36.062168 containerd[1480]: 2026-04-13 19:24:35.988 [INFO][4772] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.70/26] IPv6=[] ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" HandleID="k8s-pod-network.1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:35.992 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"46a789ab-50b2-4393-9d38-e8e911daa169", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"goldmane-9f7667bb8-224z2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacc1d1c96a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:35.993 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.70/32] ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:35.993 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacc1d1c96a0 ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:36.038 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:36.041 [INFO][4746] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"46a789ab-50b2-4393-9d38-e8e911daa169", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8", Pod:"goldmane-9f7667bb8-224z2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacc1d1c96a0", MAC:"42:f8:5d:1a:de:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:36.064378 containerd[1480]: 2026-04-13 19:24:36.056 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8" Namespace="calico-system" Pod="goldmane-9f7667bb8-224z2" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:36.160075 containerd[1480]: time="2026-04-13T19:24:36.156820701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:36.160075 containerd[1480]: time="2026-04-13T19:24:36.156906621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:36.160075 containerd[1480]: time="2026-04-13T19:24:36.156923341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:36.160075 containerd[1480]: time="2026-04-13T19:24:36.157060780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:36.186539 systemd-networkd[1371]: cali6c54e05b757: Link UP Apr 13 19:24:36.187608 systemd-networkd[1371]: cali6c54e05b757: Gained carrier Apr 13 19:24:36.227679 systemd[1]: Started cri-containerd-1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8.scope - libcontainer container 1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8. 
Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.847 [INFO][4756] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0 coredns-7d764666f9- kube-system 69279b5d-27a1-4477-a9f1-aa1c68505c9c 1000 0 2026-04-13 19:23:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a coredns-7d764666f9-ngg5x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c54e05b757 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.847 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.958 [INFO][4779] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" HandleID="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4779] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" 
HandleID="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"coredns-7d764666f9-ngg5x", "timestamp":"2026-04-13 19:24:35.958100122 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400047a6e0)} Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.987 [INFO][4779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:35.988 [INFO][4779] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.022 [INFO][4779] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.045 [INFO][4779] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.073 [INFO][4779] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.081 [INFO][4779] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.092 [INFO][4779] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.092 [INFO][4779] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.100 [INFO][4779] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489 Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.111 [INFO][4779] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.137 [INFO][4779] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.71/26] block=192.168.126.64/26 handle="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.138 [INFO][4779] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.71/26] handle="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.139 [INFO][4779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:24:36.236838 containerd[1480]: 2026-04-13 19:24:36.140 [INFO][4779] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.71/26] IPv6=[] ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" HandleID="k8s-pod-network.1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.237389 containerd[1480]: 2026-04-13 19:24:36.172 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"69279b5d-27a1-4477-a9f1-aa1c68505c9c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"coredns-7d764666f9-ngg5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6c54e05b757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:36.237389 containerd[1480]: 2026-04-13 19:24:36.172 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.71/32] ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.237389 containerd[1480]: 2026-04-13 19:24:36.172 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c54e05b757 ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.237389 containerd[1480]: 2026-04-13 19:24:36.190 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 
19:24:36.237389 containerd[1480]: 2026-04-13 19:24:36.191 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"69279b5d-27a1-4477-a9f1-aa1c68505c9c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489", Pod:"coredns-7d764666f9-ngg5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c54e05b757", MAC:"72:89:61:d9:fe:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:36.238722 containerd[1480]: 2026-04-13 19:24:36.219 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489" Namespace="kube-system" Pod="coredns-7d764666f9-ngg5x" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:36.334063 containerd[1480]: time="2026-04-13T19:24:36.333923616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-224z2,Uid:46a789ab-50b2-4393-9d38-e8e911daa169,Namespace:calico-system,Attempt:1,} returns sandbox id \"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8\"" Apr 13 19:24:36.344805 containerd[1480]: time="2026-04-13T19:24:36.343659738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:36.344805 containerd[1480]: time="2026-04-13T19:24:36.343765698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:36.344805 containerd[1480]: time="2026-04-13T19:24:36.343788578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:36.344805 containerd[1480]: time="2026-04-13T19:24:36.343876697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:36.391736 systemd[1]: Started cri-containerd-1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489.scope - libcontainer container 1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489. Apr 13 19:24:36.521948 containerd[1480]: time="2026-04-13T19:24:36.521893488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ngg5x,Uid:69279b5d-27a1-4477-a9f1-aa1c68505c9c,Namespace:kube-system,Attempt:1,} returns sandbox id \"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489\"" Apr 13 19:24:36.540055 containerd[1480]: time="2026-04-13T19:24:36.539451420Z" level=info msg="CreateContainer within sandbox \"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:24:36.542180 containerd[1480]: time="2026-04-13T19:24:36.542116570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.545206 containerd[1480]: time="2026-04-13T19:24:36.545148198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:24:36.546848 containerd[1480]: time="2026-04-13T19:24:36.546788792Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.554242 containerd[1480]: time="2026-04-13T19:24:36.554183803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.555135 containerd[1480]: time="2026-04-13T19:24:36.555086360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.266679308s" Apr 13 19:24:36.555135 containerd[1480]: time="2026-04-13T19:24:36.555131360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:24:36.559355 containerd[1480]: time="2026-04-13T19:24:36.558337067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:24:36.566803 containerd[1480]: time="2026-04-13T19:24:36.566731835Z" level=info msg="CreateContainer within sandbox \"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:24:36.578947 containerd[1480]: time="2026-04-13T19:24:36.578886668Z" level=info msg="CreateContainer within sandbox \"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c37cbadcf95227388419850a153176faab6379e977db5d8e32e9f9c74bdb11ab\"" Apr 13 19:24:36.580212 containerd[1480]: time="2026-04-13T19:24:36.580113623Z" level=info msg="StartContainer for \"c37cbadcf95227388419850a153176faab6379e977db5d8e32e9f9c74bdb11ab\"" Apr 13 19:24:36.609331 containerd[1480]: time="2026-04-13T19:24:36.608350954Z" level=info msg="CreateContainer within sandbox \"69528ceb434765d8dfff734419795545fdfcbd5681609795637729b6fd57ebe1\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"94335395197eeae051e7483f07158e2a5d4ef5b9c2953f88fdb4dd9ad5fccdd5\"" Apr 13 19:24:36.612448 containerd[1480]: time="2026-04-13T19:24:36.612382538Z" level=info msg="StartContainer for \"94335395197eeae051e7483f07158e2a5d4ef5b9c2953f88fdb4dd9ad5fccdd5\"" Apr 13 19:24:36.649643 systemd[1]: Started cri-containerd-c37cbadcf95227388419850a153176faab6379e977db5d8e32e9f9c74bdb11ab.scope - libcontainer container c37cbadcf95227388419850a153176faab6379e977db5d8e32e9f9c74bdb11ab. Apr 13 19:24:36.674709 systemd[1]: Started cri-containerd-94335395197eeae051e7483f07158e2a5d4ef5b9c2953f88fdb4dd9ad5fccdd5.scope - libcontainer container 94335395197eeae051e7483f07158e2a5d4ef5b9c2953f88fdb4dd9ad5fccdd5. Apr 13 19:24:36.707668 containerd[1480]: time="2026-04-13T19:24:36.707579889Z" level=info msg="StartContainer for \"c37cbadcf95227388419850a153176faab6379e977db5d8e32e9f9c74bdb11ab\" returns successfully" Apr 13 19:24:36.763828 containerd[1480]: time="2026-04-13T19:24:36.763732112Z" level=info msg="StartContainer for \"94335395197eeae051e7483f07158e2a5d4ef5b9c2953f88fdb4dd9ad5fccdd5\" returns successfully" Apr 13 19:24:36.887361 kubelet[2577]: I0413 19:24:36.887140 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-b2r6n" podStartSLOduration=17.276310948 podStartE2EDuration="29.887124354s" podCreationTimestamp="2026-04-13 19:24:07 +0000 UTC" firstStartedPulling="2026-04-13 19:24:23.946269866 +0000 UTC m=+39.619753302" lastFinishedPulling="2026-04-13 19:24:36.557083272 +0000 UTC m=+52.230566708" observedRunningTime="2026-04-13 19:24:36.886481517 +0000 UTC m=+52.559964993" watchObservedRunningTime="2026-04-13 19:24:36.887124354 +0000 UTC m=+52.560607790" Apr 13 19:24:36.887361 kubelet[2577]: I0413 19:24:36.887295 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ngg5x" 
podStartSLOduration=46.887290834 podStartE2EDuration="46.887290834s" podCreationTimestamp="2026-04-13 19:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:36.860282738 +0000 UTC m=+52.533766174" watchObservedRunningTime="2026-04-13 19:24:36.887290834 +0000 UTC m=+52.560774270" Apr 13 19:24:37.494719 systemd-networkd[1371]: caliacc1d1c96a0: Gained IPv6LL Apr 13 19:24:37.556778 systemd-networkd[1371]: cali6c54e05b757: Gained IPv6LL Apr 13 19:24:37.602453 kubelet[2577]: I0413 19:24:37.601995 2577 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 19:24:37.602453 kubelet[2577]: I0413 19:24:37.602028 2577 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:24:38.398068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043762958.mount: Deactivated successfully. 
Apr 13 19:24:38.422332 containerd[1480]: time="2026-04-13T19:24:38.422263211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:38.423993 containerd[1480]: time="2026-04-13T19:24:38.423930245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:24:38.424830 containerd[1480]: time="2026-04-13T19:24:38.424708323Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:38.428111 containerd[1480]: time="2026-04-13T19:24:38.428045551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:38.429043 containerd[1480]: time="2026-04-13T19:24:38.428919148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.870529721s" Apr 13 19:24:38.429043 containerd[1480]: time="2026-04-13T19:24:38.428958028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:24:38.431196 containerd[1480]: time="2026-04-13T19:24:38.431052861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:24:38.437648 containerd[1480]: time="2026-04-13T19:24:38.437603719Z" level=info msg="CreateContainer within sandbox 
\"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:24:38.472360 containerd[1480]: time="2026-04-13T19:24:38.472138041Z" level=info msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" Apr 13 19:24:38.483453 containerd[1480]: time="2026-04-13T19:24:38.483382683Z" level=info msg="CreateContainer within sandbox \"9a4d92793d63e4e2c3566adc215547201d725290d19b2c8c772598031dd780c5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d5e819e3c2d1654530b90a26829ea13477f0e45ed2b7aeeebd4899bef4d232fa\"" Apr 13 19:24:38.486018 containerd[1480]: time="2026-04-13T19:24:38.485969234Z" level=info msg="StartContainer for \"d5e819e3c2d1654530b90a26829ea13477f0e45ed2b7aeeebd4899bef4d232fa\"" Apr 13 19:24:38.546076 systemd[1]: Started cri-containerd-d5e819e3c2d1654530b90a26829ea13477f0e45ed2b7aeeebd4899bef4d232fa.scope - libcontainer container d5e819e3c2d1654530b90a26829ea13477f0e45ed2b7aeeebd4899bef4d232fa. Apr 13 19:24:38.613838 containerd[1480]: time="2026-04-13T19:24:38.613784879Z" level=info msg="StartContainer for \"d5e819e3c2d1654530b90a26829ea13477f0e45ed2b7aeeebd4899bef4d232fa\" returns successfully" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.560 [INFO][5022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.563 [INFO][5022] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" iface="eth0" netns="/var/run/netns/cni-2faa231c-a637-e476-88f8-509d856df5f2" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.563 [INFO][5022] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" iface="eth0" netns="/var/run/netns/cni-2faa231c-a637-e476-88f8-509d856df5f2" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.563 [INFO][5022] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" iface="eth0" netns="/var/run/netns/cni-2faa231c-a637-e476-88f8-509d856df5f2" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.563 [INFO][5022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.563 [INFO][5022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.600 [INFO][5052] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.600 [INFO][5052] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.601 [INFO][5052] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.616 [WARNING][5052] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.616 [INFO][5052] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.619 [INFO][5052] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:38.623704 containerd[1480]: 2026-04-13 19:24:38.621 [INFO][5022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:38.624614 containerd[1480]: time="2026-04-13T19:24:38.624460923Z" level=info msg="TearDown network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" successfully" Apr 13 19:24:38.624614 containerd[1480]: time="2026-04-13T19:24:38.624498363Z" level=info msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" returns successfully" Apr 13 19:24:38.628822 containerd[1480]: time="2026-04-13T19:24:38.628250870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-vmpn9,Uid:a434ed07-82db-484c-93a0-fd9b39b96e37,Namespace:calico-system,Attempt:1,}" Apr 13 19:24:38.783578 systemd-networkd[1371]: calie174883dcbe: Link UP Apr 13 19:24:38.784717 systemd-networkd[1371]: calie174883dcbe: Gained carrier Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.696 [INFO][5071] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0 calico-apiserver-dcfc9864- calico-system a434ed07-82db-484c-93a0-fd9b39b96e37 1036 0 2026-04-13 19:24:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dcfc9864 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-e-ee64700b2a calico-apiserver-dcfc9864-vmpn9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie174883dcbe [] [] }} ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.696 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.724 [INFO][5084] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" HandleID="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.734 [INFO][5084] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" HandleID="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002736a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-ee64700b2a", "pod":"calico-apiserver-dcfc9864-vmpn9", "timestamp":"2026-04-13 19:24:38.724018944 +0000 UTC"}, Hostname:"ci-4081-3-7-e-ee64700b2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030f4a0)} Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.734 [INFO][5084] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.734 [INFO][5084] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.734 [INFO][5084] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-ee64700b2a' Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.738 [INFO][5084] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.744 [INFO][5084] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.751 [INFO][5084] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.753 [INFO][5084] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.756 [INFO][5084] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 
2026-04-13 19:24:38.756 [INFO][5084] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.759 [INFO][5084] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08 Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.764 [INFO][5084] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.772 [INFO][5084] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.72/26] block=192.168.126.64/26 handle="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.773 [INFO][5084] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.72/26] handle="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" host="ci-4081-3-7-e-ee64700b2a" Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.773 [INFO][5084] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:24:38.809660 containerd[1480]: 2026-04-13 19:24:38.773 [INFO][5084] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.72/26] IPv6=[] ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" HandleID="k8s-pod-network.96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.776 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"a434ed07-82db-484c-93a0-fd9b39b96e37", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"", Pod:"calico-apiserver-dcfc9864-vmpn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.72/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie174883dcbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.776 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.72/32] ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.776 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie174883dcbe ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.785 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.786 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"a434ed07-82db-484c-93a0-fd9b39b96e37", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08", Pod:"calico-apiserver-dcfc9864-vmpn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie174883dcbe", MAC:"72:c6:e3:e6:a1:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:38.810400 containerd[1480]: 2026-04-13 19:24:38.806 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08" Namespace="calico-system" Pod="calico-apiserver-dcfc9864-vmpn9" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:38.850663 containerd[1480]: time="2026-04-13T19:24:38.848831080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:38.850663 containerd[1480]: time="2026-04-13T19:24:38.848912720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:38.850663 containerd[1480]: time="2026-04-13T19:24:38.848929439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:38.850663 containerd[1480]: time="2026-04-13T19:24:38.849013359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:38.881428 kubelet[2577]: I0413 19:24:38.881331 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-d6c4b6b4b-c72cn" podStartSLOduration=2.121529173 podStartE2EDuration="14.881302649s" podCreationTimestamp="2026-04-13 19:24:24 +0000 UTC" firstStartedPulling="2026-04-13 19:24:25.670255229 +0000 UTC m=+41.343738665" lastFinishedPulling="2026-04-13 19:24:38.430028665 +0000 UTC m=+54.103512141" observedRunningTime="2026-04-13 19:24:38.879525175 +0000 UTC m=+54.553008611" watchObservedRunningTime="2026-04-13 19:24:38.881302649 +0000 UTC m=+54.554786085" Apr 13 19:24:38.889613 systemd[1]: Started cri-containerd-96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08.scope - libcontainer container 96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08. 
Apr 13 19:24:39.048909 containerd[1480]: time="2026-04-13T19:24:39.048775090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dcfc9864-vmpn9,Uid:a434ed07-82db-484c-93a0-fd9b39b96e37,Namespace:calico-system,Attempt:1,} returns sandbox id \"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08\"" Apr 13 19:24:39.057648 containerd[1480]: time="2026-04-13T19:24:39.057593341Z" level=info msg="CreateContainer within sandbox \"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:24:39.077840 containerd[1480]: time="2026-04-13T19:24:39.077779717Z" level=info msg="CreateContainer within sandbox \"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"06fbd39f035804c54c49765416093b5e26f79b2ed57179c29d6803738f37a3f7\"" Apr 13 19:24:39.078347 containerd[1480]: time="2026-04-13T19:24:39.078314635Z" level=info msg="StartContainer for \"06fbd39f035804c54c49765416093b5e26f79b2ed57179c29d6803738f37a3f7\"" Apr 13 19:24:39.145710 systemd[1]: Started cri-containerd-06fbd39f035804c54c49765416093b5e26f79b2ed57179c29d6803738f37a3f7.scope - libcontainer container 06fbd39f035804c54c49765416093b5e26f79b2ed57179c29d6803738f37a3f7. Apr 13 19:24:39.192661 systemd[1]: run-netns-cni\x2d2faa231c\x2da637\x2de476\x2d88f8\x2d509d856df5f2.mount: Deactivated successfully. 
Apr 13 19:24:39.202765 containerd[1480]: time="2026-04-13T19:24:39.202719959Z" level=info msg="StartContainer for \"06fbd39f035804c54c49765416093b5e26f79b2ed57179c29d6803738f37a3f7\" returns successfully" Apr 13 19:24:39.893456 kubelet[2577]: I0413 19:24:39.892182 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-dcfc9864-vmpn9" podStartSLOduration=34.8921538 podStartE2EDuration="34.8921538s" podCreationTimestamp="2026-04-13 19:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:39.889680487 +0000 UTC m=+55.563163923" watchObservedRunningTime="2026-04-13 19:24:39.8921538 +0000 UTC m=+55.565637276" Apr 13 19:24:39.994270 systemd-networkd[1371]: calie174883dcbe: Gained IPv6LL Apr 13 19:24:41.585483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838184724.mount: Deactivated successfully. Apr 13 19:24:41.942129 containerd[1480]: time="2026-04-13T19:24:41.940800987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:41.942129 containerd[1480]: time="2026-04-13T19:24:41.942004584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:24:41.943024 containerd[1480]: time="2026-04-13T19:24:41.942982861Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:41.945941 containerd[1480]: time="2026-04-13T19:24:41.945896853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:41.946877 containerd[1480]: 
time="2026-04-13T19:24:41.946838850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.515741909s" Apr 13 19:24:41.947005 containerd[1480]: time="2026-04-13T19:24:41.946987570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 19:24:41.953939 containerd[1480]: time="2026-04-13T19:24:41.953899911Z" level=info msg="CreateContainer within sandbox \"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:24:41.972473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51581882.mount: Deactivated successfully. Apr 13 19:24:41.979530 containerd[1480]: time="2026-04-13T19:24:41.979460439Z" level=info msg="CreateContainer within sandbox \"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f\"" Apr 13 19:24:41.983484 containerd[1480]: time="2026-04-13T19:24:41.982566390Z" level=info msg="StartContainer for \"59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f\"" Apr 13 19:24:42.029882 systemd[1]: Started cri-containerd-59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f.scope - libcontainer container 59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f. 
Apr 13 19:24:42.070489 containerd[1480]: time="2026-04-13T19:24:42.070251957Z" level=info msg="StartContainer for \"59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f\" returns successfully" Apr 13 19:24:44.494608 containerd[1480]: time="2026-04-13T19:24:44.494505466Z" level=info msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.571 [WARNING][5284] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"46a789ab-50b2-4393-9d38-e8e911daa169", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8", Pod:"goldmane-9f7667bb8-224z2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliacc1d1c96a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.572 [INFO][5284] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.572 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" iface="eth0" netns="" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.572 [INFO][5284] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.572 [INFO][5284] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.596 [INFO][5292] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.596 [INFO][5292] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.596 [INFO][5292] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.607 [WARNING][5292] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.607 [INFO][5292] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.610 [INFO][5292] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:44.614083 containerd[1480]: 2026-04-13 19:24:44.612 [INFO][5284] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.614570 containerd[1480]: time="2026-04-13T19:24:44.614133550Z" level=info msg="TearDown network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" successfully" Apr 13 19:24:44.614570 containerd[1480]: time="2026-04-13T19:24:44.614160590Z" level=info msg="StopPodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" returns successfully" Apr 13 19:24:44.614921 containerd[1480]: time="2026-04-13T19:24:44.614893908Z" level=info msg="RemovePodSandbox for \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" Apr 13 19:24:44.614960 containerd[1480]: time="2026-04-13T19:24:44.614936028Z" level=info msg="Forcibly stopping sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\"" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.662 [WARNING][5307] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"46a789ab-50b2-4393-9d38-e8e911daa169", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1209f41a936b400080be1f465a29081006a9ce776844f51e1057c9e0aef6ded8", Pod:"goldmane-9f7667bb8-224z2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacc1d1c96a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.662 [INFO][5307] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.662 [INFO][5307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" iface="eth0" netns="" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.662 [INFO][5307] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.662 [INFO][5307] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.684 [INFO][5314] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.684 [INFO][5314] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.684 [INFO][5314] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.696 [WARNING][5314] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.696 [INFO][5314] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" HandleID="k8s-pod-network.a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Workload="ci--4081--3--7--e--ee64700b2a-k8s-goldmane--9f7667bb8--224z2-eth0" Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.699 [INFO][5314] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:44.703618 containerd[1480]: 2026-04-13 19:24:44.701 [INFO][5307] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83" Apr 13 19:24:44.704310 containerd[1480]: time="2026-04-13T19:24:44.703689463Z" level=info msg="TearDown network for sandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" successfully" Apr 13 19:24:44.727549 containerd[1480]: time="2026-04-13T19:24:44.726846650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:44.727549 containerd[1480]: time="2026-04-13T19:24:44.726952529Z" level=info msg="RemovePodSandbox \"a5235e47504c513cfa717aa5a80227a4b51127936d8ede6c41a55871982e7b83\" returns successfully" Apr 13 19:24:44.728356 containerd[1480]: time="2026-04-13T19:24:44.727972527Z" level=info msg="StopPodSandbox for \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\"" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.774 [WARNING][5328] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0", GenerateName:"calico-kube-controllers-5d6784dd66-", Namespace:"calico-system", SelfLink:"", UID:"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6784dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e", Pod:"calico-kube-controllers-5d6784dd66-rscc9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied041ec83bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.775 [INFO][5328] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.775 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" iface="eth0" netns="" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.775 [INFO][5328] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.775 [INFO][5328] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.798 [INFO][5335] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.798 [INFO][5335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.798 [INFO][5335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.811 [WARNING][5335] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.811 [INFO][5335] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.813 [INFO][5335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:44.822520 containerd[1480]: 2026-04-13 19:24:44.817 [INFO][5328] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.822520 containerd[1480]: time="2026-04-13T19:24:44.822219069Z" level=info msg="TearDown network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" successfully" Apr 13 19:24:44.822520 containerd[1480]: time="2026-04-13T19:24:44.822254989Z" level=info msg="StopPodSandbox for \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" returns successfully" Apr 13 19:24:44.824806 containerd[1480]: time="2026-04-13T19:24:44.824736744Z" level=info msg="RemovePodSandbox for \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\"" Apr 13 19:24:44.824806 containerd[1480]: time="2026-04-13T19:24:44.824795383Z" level=info msg="Forcibly stopping sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\"" Apr 13 19:24:44.917766 systemd[1]: run-containerd-runc-k8s.io-59dcf8864d707979de001b4a7010ddc6be8df655b8454b1e573e15894773a63f-runc.9x2wWY.mount: Deactivated successfully. 
Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.874 [WARNING][5349] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0", GenerateName:"calico-kube-controllers-5d6784dd66-", Namespace:"calico-system", SelfLink:"", UID:"6839a9ef-85b8-4ddf-8641-0352b0c9dd4c", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6784dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"aa35a1eec1520c74aa1dd5abc5b0a6ca2d679a42faf0ffc8dc7263c81d93d41e", Pod:"calico-kube-controllers-5d6784dd66-rscc9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied041ec83bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.874 [INFO][5349] cni-plugin/k8s.go 652: Cleaning up 
netns ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.874 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" iface="eth0" netns="" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.874 [INFO][5349] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.874 [INFO][5349] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.906 [INFO][5356] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.908 [INFO][5356] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.908 [INFO][5356] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.924 [WARNING][5356] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.924 [INFO][5356] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" HandleID="k8s-pod-network.fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--kube--controllers--5d6784dd66--rscc9-eth0" Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.927 [INFO][5356] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:44.932312 containerd[1480]: 2026-04-13 19:24:44.929 [INFO][5349] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42" Apr 13 19:24:44.932931 containerd[1480]: time="2026-04-13T19:24:44.932393575Z" level=info msg="TearDown network for sandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" successfully" Apr 13 19:24:44.936855 containerd[1480]: time="2026-04-13T19:24:44.936796405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:44.937199 containerd[1480]: time="2026-04-13T19:24:44.936884365Z" level=info msg="RemovePodSandbox \"fbe5471fc3772a45bda34ae4ec8471df711ca31339c01fb06283fdc5eb911f42\" returns successfully" Apr 13 19:24:44.938016 containerd[1480]: time="2026-04-13T19:24:44.937684523Z" level=info msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:44.998 [WARNING][5389] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:44.998 [INFO][5389] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:44.998 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" iface="eth0" netns="" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:44.998 [INFO][5389] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:44.998 [INFO][5389] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.025 [INFO][5398] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.026 [INFO][5398] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.026 [INFO][5398] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.038 [WARNING][5398] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.038 [INFO][5398] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.040 [INFO][5398] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.045537 containerd[1480]: 2026-04-13 19:24:45.041 [INFO][5389] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.047397 containerd[1480]: time="2026-04-13T19:24:45.046008999Z" level=info msg="TearDown network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" successfully" Apr 13 19:24:45.047397 containerd[1480]: time="2026-04-13T19:24:45.046061039Z" level=info msg="StopPodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" returns successfully" Apr 13 19:24:45.047397 containerd[1480]: time="2026-04-13T19:24:45.047001677Z" level=info msg="RemovePodSandbox for \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" Apr 13 19:24:45.047397 containerd[1480]: time="2026-04-13T19:24:45.047035077Z" level=info msg="Forcibly stopping sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\"" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.093 [WARNING][5412] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" WorkloadEndpoint="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.094 [INFO][5412] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.094 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" iface="eth0" netns="" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.094 [INFO][5412] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.094 [INFO][5412] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.118 [INFO][5420] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.119 [INFO][5420] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.119 [INFO][5420] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.135 [WARNING][5420] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.135 [INFO][5420] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" HandleID="k8s-pod-network.7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Workload="ci--4081--3--7--e--ee64700b2a-k8s-whisker--7cd4c54cd5--vb8pw-eth0" Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.138 [INFO][5420] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.142071 containerd[1480]: 2026-04-13 19:24:45.140 [INFO][5412] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98" Apr 13 19:24:45.142771 containerd[1480]: time="2026-04-13T19:24:45.142111991Z" level=info msg="TearDown network for sandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" successfully" Apr 13 19:24:45.146926 containerd[1480]: time="2026-04-13T19:24:45.146872661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:45.147076 containerd[1480]: time="2026-04-13T19:24:45.146997260Z" level=info msg="RemovePodSandbox \"7d0f2d09e14532008ee44c4a7320d9bdbb5bd149bb257763a59e759ee437cb98\" returns successfully" Apr 13 19:24:45.147641 containerd[1480]: time="2026-04-13T19:24:45.147600899Z" level=info msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.198 [WARNING][5434] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"69279b5d-27a1-4477-a9f1-aa1c68505c9c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489", Pod:"coredns-7d764666f9-ngg5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c54e05b757", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.200 [INFO][5434] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.200 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" iface="eth0" netns="" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.200 [INFO][5434] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.200 [INFO][5434] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.249 [INFO][5441] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.249 [INFO][5441] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.249 [INFO][5441] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.263 [WARNING][5441] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.263 [INFO][5441] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.266 [INFO][5441] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.271188 containerd[1480]: 2026-04-13 19:24:45.268 [INFO][5434] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.272749 containerd[1480]: time="2026-04-13T19:24:45.271218991Z" level=info msg="TearDown network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" successfully" Apr 13 19:24:45.272749 containerd[1480]: time="2026-04-13T19:24:45.271256471Z" level=info msg="StopPodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" returns successfully" Apr 13 19:24:45.272749 containerd[1480]: time="2026-04-13T19:24:45.272031869Z" level=info msg="RemovePodSandbox for \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" Apr 13 19:24:45.272749 containerd[1480]: time="2026-04-13T19:24:45.272074709Z" level=info msg="Forcibly stopping sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\"" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.319 [WARNING][5457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"69279b5d-27a1-4477-a9f1-aa1c68505c9c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"1d1c6c36359401ddc1f0444ad739f75ef9116d2104209a0f62b62da672514489", Pod:"coredns-7d764666f9-ngg5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c54e05b757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.321 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.321 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" iface="eth0" netns="" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.321 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.321 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.342 [INFO][5464] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.342 [INFO][5464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.342 [INFO][5464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.358 [WARNING][5464] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.358 [INFO][5464] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" HandleID="k8s-pod-network.561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--ngg5x-eth0" Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.360 [INFO][5464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.364717 containerd[1480]: 2026-04-13 19:24:45.362 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77" Apr 13 19:24:45.365145 containerd[1480]: time="2026-04-13T19:24:45.364762269Z" level=info msg="TearDown network for sandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" successfully" Apr 13 19:24:45.368902 containerd[1480]: time="2026-04-13T19:24:45.368841180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:45.369008 containerd[1480]: time="2026-04-13T19:24:45.368967860Z" level=info msg="RemovePodSandbox \"561f945af7426abfc16d5ccc5c2a5786df8b22dc85386ae68c08c2b430fa2f77\" returns successfully" Apr 13 19:24:45.369640 containerd[1480]: time="2026-04-13T19:24:45.369613738Z" level=info msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.410 [WARNING][5478] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"a434ed07-82db-484c-93a0-fd9b39b96e37", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08", Pod:"calico-apiserver-dcfc9864-vmpn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie174883dcbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.410 [INFO][5478] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.410 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" iface="eth0" netns="" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.410 [INFO][5478] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.410 [INFO][5478] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.437 [INFO][5485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.438 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.438 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.450 [WARNING][5485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.450 [INFO][5485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.453 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.457667 containerd[1480]: 2026-04-13 19:24:45.455 [INFO][5478] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.457667 containerd[1480]: time="2026-04-13T19:24:45.457567988Z" level=info msg="TearDown network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" successfully" Apr 13 19:24:45.457667 containerd[1480]: time="2026-04-13T19:24:45.457604388Z" level=info msg="StopPodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" returns successfully" Apr 13 19:24:45.460802 containerd[1480]: time="2026-04-13T19:24:45.460740741Z" level=info msg="RemovePodSandbox for \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" Apr 13 19:24:45.460946 containerd[1480]: time="2026-04-13T19:24:45.460810901Z" level=info msg="Forcibly stopping sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\"" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.501 [WARNING][5500] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"a434ed07-82db-484c-93a0-fd9b39b96e37", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"96a08a2458981ef5e38e297c7a2d5639673da1a6537148cd82167979af55cb08", Pod:"calico-apiserver-dcfc9864-vmpn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie174883dcbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.502 [INFO][5500] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.502 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" iface="eth0" netns="" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.502 [INFO][5500] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.502 [INFO][5500] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.525 [INFO][5507] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.525 [INFO][5507] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.525 [INFO][5507] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.538 [WARNING][5507] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.538 [INFO][5507] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" HandleID="k8s-pod-network.8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--vmpn9-eth0" Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.541 [INFO][5507] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.545528 containerd[1480]: 2026-04-13 19:24:45.543 [INFO][5500] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64" Apr 13 19:24:45.546202 containerd[1480]: time="2026-04-13T19:24:45.545582117Z" level=info msg="TearDown network for sandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" successfully" Apr 13 19:24:45.550114 containerd[1480]: time="2026-04-13T19:24:45.550063187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:45.550931 containerd[1480]: time="2026-04-13T19:24:45.550158107Z" level=info msg="RemovePodSandbox \"8060aebaa14993bbca32c6fd4d64719cf66229be1c4e4908e5327028d6c23f64\" returns successfully" Apr 13 19:24:45.551504 containerd[1480]: time="2026-04-13T19:24:45.551106425Z" level=info msg="StopPodSandbox for \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\"" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.593 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"b2a4adae-1965-4b2d-8d19-84656dfbab7d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7", Pod:"calico-apiserver-dcfc9864-8nhqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94161c4eaef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.594 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.594 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" iface="eth0" netns="" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.594 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.594 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.620 [INFO][5528] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.620 [INFO][5528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.620 [INFO][5528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.633 [WARNING][5528] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.633 [INFO][5528] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.635 [INFO][5528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.639250 containerd[1480]: 2026-04-13 19:24:45.637 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.640383 containerd[1480]: time="2026-04-13T19:24:45.639511434Z" level=info msg="TearDown network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" successfully" Apr 13 19:24:45.640383 containerd[1480]: time="2026-04-13T19:24:45.639546434Z" level=info msg="StopPodSandbox for \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" returns successfully" Apr 13 19:24:45.641059 containerd[1480]: time="2026-04-13T19:24:45.640657551Z" level=info msg="RemovePodSandbox for \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\"" Apr 13 19:24:45.641059 containerd[1480]: time="2026-04-13T19:24:45.640691951Z" level=info msg="Forcibly stopping sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\"" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.681 [WARNING][5542] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0", GenerateName:"calico-apiserver-dcfc9864-", Namespace:"calico-system", SelfLink:"", UID:"b2a4adae-1965-4b2d-8d19-84656dfbab7d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dcfc9864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"f5929a3d881c479ea7e6c922b34ea382f35151a2dde6f70ae22962d85d5af9c7", Pod:"calico-apiserver-dcfc9864-8nhqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94161c4eaef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.681 [INFO][5542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.681 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" iface="eth0" netns="" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.682 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.682 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.713 [INFO][5550] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.714 [INFO][5550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.714 [INFO][5550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.727 [WARNING][5550] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.727 [INFO][5550] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" HandleID="k8s-pod-network.69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Workload="ci--4081--3--7--e--ee64700b2a-k8s-calico--apiserver--dcfc9864--8nhqr-eth0" Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.730 [INFO][5550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.734518 containerd[1480]: 2026-04-13 19:24:45.732 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86" Apr 13 19:24:45.735887 containerd[1480]: time="2026-04-13T19:24:45.735027827Z" level=info msg="TearDown network for sandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" successfully" Apr 13 19:24:45.741019 containerd[1480]: time="2026-04-13T19:24:45.740741694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:45.741019 containerd[1480]: time="2026-04-13T19:24:45.740821454Z" level=info msg="RemovePodSandbox \"69c136b4b471d49f60e5e8165d7e298a00591f3210c700c53fd9c0c7efd6ee86\" returns successfully" Apr 13 19:24:45.741287 containerd[1480]: time="2026-04-13T19:24:45.741263853Z" level=info msg="StopPodSandbox for \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\"" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.784 [WARNING][5564] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a4204cf2-a1a4-42c4-99b7-217ce68ed464", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27", Pod:"coredns-7d764666f9-c7f65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18050ad82e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.784 [INFO][5564] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.784 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" iface="eth0" netns="" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.784 [INFO][5564] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.784 [INFO][5564] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.809 [INFO][5571] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.809 [INFO][5571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.809 [INFO][5571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.820 [WARNING][5571] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.820 [INFO][5571] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.823 [INFO][5571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.827852 containerd[1480]: 2026-04-13 19:24:45.825 [INFO][5564] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.829139 containerd[1480]: time="2026-04-13T19:24:45.828539064Z" level=info msg="TearDown network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" successfully" Apr 13 19:24:45.829139 containerd[1480]: time="2026-04-13T19:24:45.828570104Z" level=info msg="StopPodSandbox for \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" returns successfully" Apr 13 19:24:45.829139 containerd[1480]: time="2026-04-13T19:24:45.829056223Z" level=info msg="RemovePodSandbox for \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\"" Apr 13 19:24:45.829139 containerd[1480]: time="2026-04-13T19:24:45.829085743Z" level=info msg="Forcibly stopping sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\"" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.873 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a4204cf2-a1a4-42c4-99b7-217ce68ed464", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-ee64700b2a", ContainerID:"bcc77f05c636f8bc7b233e8a6eb16af9bd0032af641059c2bdc62286dcd4da27", Pod:"coredns-7d764666f9-c7f65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18050ad82e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.873 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.873 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" iface="eth0" netns="" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.873 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.873 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.905 [INFO][5592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.906 [INFO][5592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.906 [INFO][5592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.916 [WARNING][5592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.916 [INFO][5592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" HandleID="k8s-pod-network.7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Workload="ci--4081--3--7--e--ee64700b2a-k8s-coredns--7d764666f9--c7f65-eth0" Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.918 [INFO][5592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:24:45.922863 containerd[1480]: 2026-04-13 19:24:45.920 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6" Apr 13 19:24:45.923330 containerd[1480]: time="2026-04-13T19:24:45.922916860Z" level=info msg="TearDown network for sandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" successfully" Apr 13 19:24:45.929192 containerd[1480]: time="2026-04-13T19:24:45.929123447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:45.929586 containerd[1480]: time="2026-04-13T19:24:45.929220406Z" level=info msg="RemovePodSandbox \"7b81a5a804ac8f4ba227bb5fc08f214c985a4fc0e785c8474ce0c90ea2cbd6d6\" returns successfully" Apr 13 19:25:14.991454 kubelet[2577]: I0413 19:25:14.991315 2577 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-224z2" podStartSLOduration=63.382067431 podStartE2EDuration="1m8.991294522s" podCreationTimestamp="2026-04-13 19:24:06 +0000 UTC" firstStartedPulling="2026-04-13 19:24:36.339223755 +0000 UTC m=+52.012707191" lastFinishedPulling="2026-04-13 19:24:41.948450846 +0000 UTC m=+57.621934282" observedRunningTime="2026-04-13 19:24:42.898974419 +0000 UTC m=+58.572457895" watchObservedRunningTime="2026-04-13 19:25:14.991294522 +0000 UTC m=+90.664777958" Apr 13 19:25:31.842156 systemd[1]: run-containerd-runc-k8s.io-341c818dd88e66c265cd9ab267c89b8ba665336c5f5817a19dfd70dfac5db21c-runc.ioGS71.mount: Deactivated successfully. Apr 13 19:26:09.742591 systemd[1]: Started sshd@7-49.13.49.84:22-50.85.169.122:35634.service - OpenSSH per-connection server daemon (50.85.169.122:35634). Apr 13 19:26:09.900254 sshd[5921]: Accepted publickey for core from 50.85.169.122 port 35634 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:09.903951 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:09.913275 systemd-logind[1458]: New session 8 of user core. Apr 13 19:26:09.917648 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:26:10.120841 sshd[5921]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:10.125875 systemd[1]: sshd@7-49.13.49.84:22-50.85.169.122:35634.service: Deactivated successfully. Apr 13 19:26:10.131553 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:26:10.134990 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. 
Apr 13 19:26:10.137724 systemd-logind[1458]: Removed session 8. Apr 13 19:26:15.153541 systemd[1]: Started sshd@8-49.13.49.84:22-50.85.169.122:35646.service - OpenSSH per-connection server daemon (50.85.169.122:35646). Apr 13 19:26:15.278169 sshd[5964]: Accepted publickey for core from 50.85.169.122 port 35646 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:15.280784 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:15.287509 systemd-logind[1458]: New session 9 of user core. Apr 13 19:26:15.292662 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:26:15.485863 sshd[5964]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:15.493902 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:26:15.494642 systemd[1]: sshd@8-49.13.49.84:22-50.85.169.122:35646.service: Deactivated successfully. Apr 13 19:26:15.498975 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:26:15.500586 systemd-logind[1458]: Removed session 9. Apr 13 19:26:20.522400 systemd[1]: Started sshd@9-49.13.49.84:22-50.85.169.122:57438.service - OpenSSH per-connection server daemon (50.85.169.122:57438). Apr 13 19:26:20.658928 sshd[5978]: Accepted publickey for core from 50.85.169.122 port 57438 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:20.661828 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:20.670584 systemd-logind[1458]: New session 10 of user core. Apr 13 19:26:20.678757 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:26:20.863711 sshd[5978]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:20.870387 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:26:20.870511 systemd[1]: sshd@9-49.13.49.84:22-50.85.169.122:57438.service: Deactivated successfully. 
Apr 13 19:26:20.874202 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:26:20.876788 systemd-logind[1458]: Removed session 10. Apr 13 19:26:25.899918 systemd[1]: Started sshd@10-49.13.49.84:22-50.85.169.122:57448.service - OpenSSH per-connection server daemon (50.85.169.122:57448). Apr 13 19:26:26.023646 sshd[5995]: Accepted publickey for core from 50.85.169.122 port 57448 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:26.026021 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:26.033183 systemd-logind[1458]: New session 11 of user core. Apr 13 19:26:26.043745 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:26:26.232083 sshd[5995]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:26.237400 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:26:26.237557 systemd[1]: sshd@10-49.13.49.84:22-50.85.169.122:57448.service: Deactivated successfully. Apr 13 19:26:26.240389 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:26:26.243145 systemd-logind[1458]: Removed session 11. Apr 13 19:26:31.265915 systemd[1]: Started sshd@11-49.13.49.84:22-50.85.169.122:44896.service - OpenSSH per-connection server daemon (50.85.169.122:44896). Apr 13 19:26:31.402085 sshd[6025]: Accepted publickey for core from 50.85.169.122 port 44896 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:31.405836 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:31.411631 systemd-logind[1458]: New session 12 of user core. Apr 13 19:26:31.419704 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:26:31.615706 sshd[6025]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:31.622062 systemd[1]: sshd@11-49.13.49.84:22-50.85.169.122:44896.service: Deactivated successfully. 
Apr 13 19:26:31.626186 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:26:31.630238 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:26:31.651801 systemd[1]: Started sshd@12-49.13.49.84:22-50.85.169.122:44902.service - OpenSSH per-connection server daemon (50.85.169.122:44902). Apr 13 19:26:31.653083 systemd-logind[1458]: Removed session 12. Apr 13 19:26:31.784863 sshd[6039]: Accepted publickey for core from 50.85.169.122 port 44902 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:31.787752 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:31.796522 systemd-logind[1458]: New session 13 of user core. Apr 13 19:26:31.801665 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:26:32.047997 sshd[6039]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:32.056141 systemd[1]: sshd@12-49.13.49.84:22-50.85.169.122:44902.service: Deactivated successfully. Apr 13 19:26:32.063386 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:26:32.065067 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:26:32.079115 systemd[1]: Started sshd@13-49.13.49.84:22-50.85.169.122:44918.service - OpenSSH per-connection server daemon (50.85.169.122:44918). Apr 13 19:26:32.080922 systemd-logind[1458]: Removed session 13. Apr 13 19:26:32.219141 sshd[6072]: Accepted publickey for core from 50.85.169.122 port 44918 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:32.222453 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:32.229958 systemd-logind[1458]: New session 14 of user core. Apr 13 19:26:32.234643 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 13 19:26:32.425064 sshd[6072]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:32.430660 systemd[1]: sshd@13-49.13.49.84:22-50.85.169.122:44918.service: Deactivated successfully. Apr 13 19:26:32.434481 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:26:32.436327 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:26:32.437891 systemd-logind[1458]: Removed session 14. Apr 13 19:26:37.463960 systemd[1]: Started sshd@14-49.13.49.84:22-50.85.169.122:44928.service - OpenSSH per-connection server daemon (50.85.169.122:44928). Apr 13 19:26:37.582518 sshd[6104]: Accepted publickey for core from 50.85.169.122 port 44928 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:37.585122 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:37.591778 systemd-logind[1458]: New session 15 of user core. Apr 13 19:26:37.599730 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:26:37.791307 sshd[6104]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:37.800023 systemd[1]: sshd@14-49.13.49.84:22-50.85.169.122:44928.service: Deactivated successfully. Apr 13 19:26:37.803136 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:26:37.805149 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:26:37.819152 systemd-logind[1458]: Removed session 15. Apr 13 19:26:37.827294 systemd[1]: Started sshd@15-49.13.49.84:22-50.85.169.122:44940.service - OpenSSH per-connection server daemon (50.85.169.122:44940). Apr 13 19:26:37.965899 sshd[6116]: Accepted publickey for core from 50.85.169.122 port 44940 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:37.969005 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:37.975878 systemd-logind[1458]: New session 16 of user core. 
Apr 13 19:26:37.979610 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:26:38.341771 sshd[6116]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:38.348644 systemd[1]: sshd@15-49.13.49.84:22-50.85.169.122:44940.service: Deactivated successfully. Apr 13 19:26:38.352299 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:26:38.353401 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:26:38.368474 systemd-logind[1458]: Removed session 16. Apr 13 19:26:38.373832 systemd[1]: Started sshd@16-49.13.49.84:22-50.85.169.122:44944.service - OpenSSH per-connection server daemon (50.85.169.122:44944). Apr 13 19:26:38.496338 sshd[6150]: Accepted publickey for core from 50.85.169.122 port 44944 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:38.499703 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:38.507219 systemd-logind[1458]: New session 17 of user core. Apr 13 19:26:38.514744 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:26:39.599949 sshd[6150]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:39.608080 systemd[1]: sshd@16-49.13.49.84:22-50.85.169.122:44944.service: Deactivated successfully. Apr 13 19:26:39.608130 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:26:39.615003 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:39.634855 systemd[1]: Started sshd@17-49.13.49.84:22-50.85.169.122:50776.service - OpenSSH per-connection server daemon (50.85.169.122:50776). Apr 13 19:26:39.636151 systemd-logind[1458]: Removed session 17. 
Apr 13 19:26:39.763322 sshd[6169]: Accepted publickey for core from 50.85.169.122 port 50776 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:39.766117 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:39.772353 systemd-logind[1458]: New session 18 of user core. Apr 13 19:26:39.782661 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:40.098924 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:40.108815 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:40.110075 systemd[1]: sshd@17-49.13.49.84:22-50.85.169.122:50776.service: Deactivated successfully. Apr 13 19:26:40.113375 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:40.128174 systemd-logind[1458]: Removed session 18. Apr 13 19:26:40.133883 systemd[1]: Started sshd@18-49.13.49.84:22-50.85.169.122:50792.service - OpenSSH per-connection server daemon (50.85.169.122:50792). Apr 13 19:26:40.253967 sshd[6185]: Accepted publickey for core from 50.85.169.122 port 50792 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:40.256330 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:40.264646 systemd-logind[1458]: New session 19 of user core. Apr 13 19:26:40.271668 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:26:40.450713 sshd[6185]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:40.456568 systemd[1]: sshd@18-49.13.49.84:22-50.85.169.122:50792.service: Deactivated successfully. Apr 13 19:26:40.460179 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:40.462838 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:40.464379 systemd-logind[1458]: Removed session 19. 
Apr 13 19:26:45.483042 systemd[1]: Started sshd@19-49.13.49.84:22-50.85.169.122:50800.service - OpenSSH per-connection server daemon (50.85.169.122:50800). Apr 13 19:26:45.627329 sshd[6223]: Accepted publickey for core from 50.85.169.122 port 50800 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:45.629676 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:45.638821 systemd-logind[1458]: New session 20 of user core. Apr 13 19:26:45.645837 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:45.828246 sshd[6223]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:45.835729 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:45.836538 systemd[1]: sshd@19-49.13.49.84:22-50.85.169.122:50800.service: Deactivated successfully. Apr 13 19:26:45.840597 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:45.842272 systemd-logind[1458]: Removed session 20. Apr 13 19:26:50.866929 systemd[1]: Started sshd@20-49.13.49.84:22-50.85.169.122:55056.service - OpenSSH per-connection server daemon (50.85.169.122:55056). Apr 13 19:26:50.994465 sshd[6236]: Accepted publickey for core from 50.85.169.122 port 55056 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:50.996511 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:51.004467 systemd-logind[1458]: New session 21 of user core. Apr 13 19:26:51.010684 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:26:51.191033 sshd[6236]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:51.198233 systemd[1]: sshd@20-49.13.49.84:22-50.85.169.122:55056.service: Deactivated successfully. Apr 13 19:26:51.201701 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:26:51.204616 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. 
Apr 13 19:26:51.206033 systemd-logind[1458]: Removed session 21. Apr 13 19:26:56.229976 systemd[1]: Started sshd@21-49.13.49.84:22-50.85.169.122:55066.service - OpenSSH per-connection server daemon (50.85.169.122:55066). Apr 13 19:26:56.359747 sshd[6251]: Accepted publickey for core from 50.85.169.122 port 55066 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:56.361169 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:56.368927 systemd-logind[1458]: New session 22 of user core. Apr 13 19:26:56.373920 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:26:56.543304 sshd[6251]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:56.550789 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:26:56.551868 systemd[1]: sshd@21-49.13.49.84:22-50.85.169.122:55066.service: Deactivated successfully. Apr 13 19:26:56.555847 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:26:56.557550 systemd-logind[1458]: Removed session 22. Apr 13 19:27:11.214089 systemd[1]: cri-containerd-78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd.scope: Deactivated successfully. Apr 13 19:27:11.214944 systemd[1]: cri-containerd-78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd.scope: Consumed 3.743s CPU time, 18.1M memory peak, 0B memory swap peak. 
Apr 13 19:27:11.224764 kubelet[2577]: E0413 19:27:11.223691 2577 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40588->10.0.0.2:2379: read: connection timed out" Apr 13 19:27:11.250001 containerd[1480]: time="2026-04-13T19:27:11.249923926Z" level=info msg="shim disconnected" id=78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd namespace=k8s.io Apr 13 19:27:11.250001 containerd[1480]: time="2026-04-13T19:27:11.249999366Z" level=warning msg="cleaning up after shim disconnected" id=78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd namespace=k8s.io Apr 13 19:27:11.250001 containerd[1480]: time="2026-04-13T19:27:11.250009766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:11.252947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd-rootfs.mount: Deactivated successfully. Apr 13 19:27:11.420643 kubelet[2577]: I0413 19:27:11.418581 2577 scope.go:122] "RemoveContainer" containerID="78a9d086305e1dfbbc140ac25883a89a7a76de520c9ef1212c1bb1bdad4463cd" Apr 13 19:27:11.427149 containerd[1480]: time="2026-04-13T19:27:11.426060907Z" level=info msg="CreateContainer within sandbox \"dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 19:27:11.442701 containerd[1480]: time="2026-04-13T19:27:11.442644071Z" level=info msg="CreateContainer within sandbox \"dc8f426915b763996ce0410d616f393932814d4c35c6845eb87789ea73910008\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bef60a4751fcd36ba3fa008781f5aa57be68115a8a6d0a81951a0b7d2f002366\"" Apr 13 19:27:11.444995 containerd[1480]: time="2026-04-13T19:27:11.443650033Z" level=info msg="StartContainer for \"bef60a4751fcd36ba3fa008781f5aa57be68115a8a6d0a81951a0b7d2f002366\"" Apr 13 19:27:11.477698 
systemd[1]: Started cri-containerd-bef60a4751fcd36ba3fa008781f5aa57be68115a8a6d0a81951a0b7d2f002366.scope - libcontainer container bef60a4751fcd36ba3fa008781f5aa57be68115a8a6d0a81951a0b7d2f002366. Apr 13 19:27:11.521600 containerd[1480]: time="2026-04-13T19:27:11.521548198Z" level=info msg="StartContainer for \"bef60a4751fcd36ba3fa008781f5aa57be68115a8a6d0a81951a0b7d2f002366\" returns successfully" Apr 13 19:27:11.963820 systemd[1]: cri-containerd-90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6.scope: Deactivated successfully. Apr 13 19:27:11.964342 systemd[1]: cri-containerd-90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6.scope: Consumed 16.163s CPU time. Apr 13 19:27:11.997063 containerd[1480]: time="2026-04-13T19:27:11.996985123Z" level=info msg="shim disconnected" id=90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6 namespace=k8s.io Apr 13 19:27:11.997063 containerd[1480]: time="2026-04-13T19:27:11.997058283Z" level=warning msg="cleaning up after shim disconnected" id=90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6 namespace=k8s.io Apr 13 19:27:11.997063 containerd[1480]: time="2026-04-13T19:27:11.997068763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:12.254655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:12.427652 kubelet[2577]: I0413 19:27:12.427620 2577 scope.go:122] "RemoveContainer" containerID="90fee36026d85233093cb3d25136bee51295db821c11b7fcb958fd11cdd92ba6" Apr 13 19:27:12.429863 containerd[1480]: time="2026-04-13T19:27:12.429821562Z" level=info msg="CreateContainer within sandbox \"89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 13 19:27:12.451341 containerd[1480]: time="2026-04-13T19:27:12.451286098Z" level=info msg="CreateContainer within sandbox \"89cdb0e0d16ea2215ddbbc8336711fb0d5b98860ec61779ab7a07065aaf0dbb8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9f0bdd555aa53147238b031709074f6477d2cd058dc3d80e4d88991462fd0ca3\"" Apr 13 19:27:12.451946 containerd[1480]: time="2026-04-13T19:27:12.451919019Z" level=info msg="StartContainer for \"9f0bdd555aa53147238b031709074f6477d2cd058dc3d80e4d88991462fd0ca3\"" Apr 13 19:27:12.501988 systemd[1]: Started cri-containerd-9f0bdd555aa53147238b031709074f6477d2cd058dc3d80e4d88991462fd0ca3.scope - libcontainer container 9f0bdd555aa53147238b031709074f6477d2cd058dc3d80e4d88991462fd0ca3. Apr 13 19:27:12.533652 containerd[1480]: time="2026-04-13T19:27:12.533540430Z" level=info msg="StartContainer for \"9f0bdd555aa53147238b031709074f6477d2cd058dc3d80e4d88991462fd0ca3\" returns successfully"