Apr 13 19:26:59.904973 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:26:59.904998 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:26:59.905009 kernel: KASLR enabled
Apr 13 19:26:59.905015 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:26:59.905027 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:26:59.905033 kernel: random: crng init done
Apr 13 19:26:59.905040 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:26:59.905046 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:26:59.905053 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:26:59.905060 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905066 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905072 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905078 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905084 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905092 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905100 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905106 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905113 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:26:59.905119 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:26:59.905126 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:26:59.905132 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:26:59.905139 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:26:59.905145 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 13 19:26:59.905151 kernel: Zone ranges:
Apr 13 19:26:59.905158 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:26:59.905166 kernel: DMA32 empty
Apr 13 19:26:59.905172 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:26:59.905178 kernel: Movable zone start for each node
Apr 13 19:26:59.905185 kernel: Early memory node ranges
Apr 13 19:26:59.905191 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:26:59.905198 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:26:59.905204 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:26:59.905211 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:26:59.905217 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:26:59.905223 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:26:59.905230 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:26:59.905236 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:26:59.905244 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:26:59.905250 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:26:59.905257 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:26:59.905277 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:26:59.905285 kernel: psci: Trusted OS migration not required
Apr 13 19:26:59.905292 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:26:59.905301 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:26:59.905307 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:26:59.905314 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:26:59.905321 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:26:59.905328 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:26:59.905335 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:26:59.905341 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:26:59.905348 kernel: CPU features: detected: Spectre-v4
Apr 13 19:26:59.905355 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:26:59.905361 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:26:59.905370 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:26:59.905376 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:26:59.905383 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:26:59.905390 kernel: alternatives: applying boot alternatives
Apr 13 19:26:59.905398 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:26:59.905405 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:26:59.905412 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:26:59.905419 kernel: Fallback order for Node 0: 0
Apr 13 19:26:59.905426 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:26:59.905433 kernel: Policy zone: Normal
Apr 13 19:26:59.905440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:26:59.905449 kernel: software IO TLB: area num 2.
Apr 13 19:26:59.905456 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:26:59.905463 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 13 19:26:59.905470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:26:59.905477 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:26:59.905484 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:26:59.905492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:26:59.905499 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:26:59.905505 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:26:59.905512 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:26:59.905519 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:26:59.905526 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:26:59.905534 kernel: GICv3: 256 SPIs implemented
Apr 13 19:26:59.905541 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:26:59.905548 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:26:59.905554 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:26:59.905561 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:26:59.905568 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:26:59.905575 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:26:59.905582 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:26:59.905588 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:26:59.905595 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:26:59.905602 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:26:59.905611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:59.905618 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:26:59.905625 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:26:59.905632 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:26:59.905639 kernel: Console: colour dummy device 80x25
Apr 13 19:26:59.905646 kernel: ACPI: Core revision 20230628
Apr 13 19:26:59.905654 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:26:59.905661 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:26:59.905668 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:26:59.905675 kernel: landlock: Up and running.
Apr 13 19:26:59.905683 kernel: SELinux: Initializing.
Apr 13 19:26:59.905690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:59.905697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:59.905704 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:59.905712 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:59.905719 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:26:59.905726 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:26:59.905733 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:26:59.905740 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:26:59.905748 kernel: Remapping and enabling EFI services.
Apr 13 19:26:59.905755 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:26:59.905762 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:26:59.905770 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:26:59.905777 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:26:59.905784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:59.905791 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:26:59.905798 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:26:59.905805 kernel: SMP: Total of 2 processors activated.
Apr 13 19:26:59.905812 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:26:59.905821 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:26:59.905829 kernel: CPU features: detected: Common not Private translations
Apr 13 19:26:59.905841 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:26:59.905849 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:26:59.905857 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:26:59.905864 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:26:59.905871 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:26:59.905879 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:26:59.905888 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:26:59.905895 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:26:59.905902 kernel: alternatives: applying system-wide alternatives
Apr 13 19:26:59.905910 kernel: devtmpfs: initialized
Apr 13 19:26:59.905917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:26:59.905925 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:26:59.905942 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:26:59.905951 kernel: SMBIOS 3.0.0 present.
Apr 13 19:26:59.905960 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:26:59.905968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:26:59.905975 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:26:59.905983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:26:59.905990 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:26:59.905997 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:26:59.906005 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Apr 13 19:26:59.906012 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:26:59.906019 kernel: cpuidle: using governor menu
Apr 13 19:26:59.906028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:26:59.906036 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:26:59.906043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:26:59.906050 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:26:59.906058 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:26:59.906065 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:26:59.906072 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:26:59.906080 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:26:59.906087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:26:59.906096 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:26:59.906104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:26:59.906111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:26:59.906118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:26:59.906125 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:26:59.906133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:26:59.906140 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:26:59.906147 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:26:59.906155 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:26:59.906164 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:26:59.906171 kernel: ACPI: Interpreter enabled
Apr 13 19:26:59.906178 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:26:59.906186 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:26:59.906193 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:26:59.906200 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:26:59.906208 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:26:59.906394 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:26:59.906494 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:26:59.906683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:26:59.906770 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:26:59.906852 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:26:59.906864 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:26:59.906874 kernel: PCI host bridge to bus 0000:00
Apr 13 19:26:59.907758 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:26:59.907880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:26:59.907980 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:26:59.908045 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:26:59.908130 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:26:59.908206 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:26:59.908292 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:26:59.908366 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:26:59.908854 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.908927 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:26:59.909032 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909102 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:26:59.909177 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909242 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:26:59.909342 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909412 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:26:59.909484 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909550 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:26:59.909629 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909694 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:26:59.909770 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.909838 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:26:59.909911 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.912143 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:26:59.912236 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:26:59.912365 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:26:59.912454 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:26:59.912524 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:26:59.912606 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:26:59.912694 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:26:59.912766 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:26:59.912834 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:26:59.912911 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:26:59.914073 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:26:59.914169 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:26:59.914241 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:26:59.914329 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:26:59.914416 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:26:59.914486 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:26:59.914572 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:26:59.914643 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:26:59.914712 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:26:59.914788 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:26:59.914856 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:26:59.914924 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:26:59.915571 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:26:59.915647 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:26:59.915714 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:26:59.915805 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:26:59.915881 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:26:59.916324 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:26:59.916416 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:26:59.916493 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:26:59.916559 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:26:59.916623 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:26:59.916693 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:26:59.916759 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:26:59.916824 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:26:59.916894 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:26:59.916980 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:26:59.917053 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:26:59.917123 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:26:59.917192 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:26:59.917256 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:26:59.917344 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:26:59.917411 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:26:59.917476 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:26:59.917550 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:26:59.917615 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:26:59.917681 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:26:59.917751 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:26:59.917818 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:26:59.917884 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:26:59.920090 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:26:59.920190 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:26:59.920302 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:26:59.920386 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:26:59.920454 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:26:59.920522 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:26:59.920589 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:26:59.920660 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:26:59.920733 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:26:59.920804 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:26:59.920872 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:26:59.920964 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:26:59.922110 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:26:59.922205 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:26:59.922287 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:26:59.922372 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:26:59.922438 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:26:59.922506 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:26:59.922571 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:26:59.922639 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:26:59.922705 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:26:59.922776 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:26:59.922845 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:26:59.922916 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:26:59.923500 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:26:59.923580 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:26:59.923646 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:26:59.923715 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:26:59.923781 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:26:59.923850 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:26:59.923956 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:26:59.924035 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:26:59.924101 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:26:59.924169 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:26:59.924233 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:26:59.924321 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:26:59.924391 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:26:59.924459 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:26:59.924530 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:26:59.924598 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:26:59.924665 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:26:59.924738 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:26:59.924815 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:26:59.924884 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:26:59.925737 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:26:59.925860 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:26:59.925985 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:26:59.926063 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:26:59.926129 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:26:59.926201 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:26:59.926287 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:26:59.926357 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:26:59.926421 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:26:59.926485 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:26:59.926558 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:26:59.926627 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:26:59.926693 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:26:59.926758 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:26:59.926826 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:26:59.926890 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:26:59.926988 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:26:59.927059 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:26:59.927124 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:26:59.927189 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:26:59.927254 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:26:59.927347 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:26:59.927423 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:26:59.927496 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:26:59.927562 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:26:59.927630 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:26:59.927696 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:26:59.927770 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:26:59.927838 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:26:59.927906 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:26:59.928063 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:26:59.928150 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:26:59.928232 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:26:59.928355 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:26:59.928430 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:26:59.928506 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:26:59.928573 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:26:59.928638 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:26:59.928707 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:26:59.928772 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:26:59.928839 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:26:59.928906 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:26:59.929003 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:26:59.929090 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:26:59.929162 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:26:59.929228 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:26:59.929310 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:26:59.929379 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:26:59.929450 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:26:59.929510 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:26:59.929569 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:26:59.929642 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:26:59.929705 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:26:59.929770 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:26:59.929841 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:26:59.929902 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:26:59.930019 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:26:59.930091 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:26:59.930160 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:26:59.930230 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:26:59.930314 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:26:59.930376 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:26:59.930450 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:26:59.930519 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:26:59.930579 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:26:59.930638 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:26:59.930720 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:26:59.930781 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:26:59.930844 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:26:59.930911 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:26:59.931004 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:26:59.931070 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:26:59.931139 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:26:59.931200 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:26:59.931303 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:26:59.931394 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:26:59.931458 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:26:59.931523 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:26:59.931533 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:26:59.931542 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:26:59.931550 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:26:59.931558 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:26:59.931566 kernel: iommu: Default domain type: Translated
Apr 13 19:26:59.931574 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:26:59.931581 kernel: efivars: Registered efivars operations
Apr 13 19:26:59.931589 kernel: vgaarb: loaded
Apr 13 19:26:59.931599 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:26:59.931607 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:26:59.931615 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:26:59.931623 kernel: pnp: PnP ACPI init
Apr 13 19:26:59.931696 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:26:59.931708 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:26:59.931716 kernel: NET: Registered PF_INET protocol family
Apr 13 19:26:59.931724 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:26:59.931734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:26:59.931742 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:26:59.931750 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:26:59.931758
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:26:59.931766 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:26:59.931774 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:26:59.931782 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:26:59.931790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:26:59.931865 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:26:59.931877 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:26:59.931885 kernel: kvm [1]: HYP mode not available Apr 13 19:26:59.931893 kernel: Initialise system trusted keyrings Apr 13 19:26:59.931901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:26:59.931909 kernel: Key type asymmetric registered Apr 13 19:26:59.931916 kernel: Asymmetric key parser 'x509' registered Apr 13 19:26:59.931924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:26:59.932002 kernel: io scheduler mq-deadline registered Apr 13 19:26:59.932016 kernel: io scheduler kyber registered Apr 13 19:26:59.932029 kernel: io scheduler bfq registered Apr 13 19:26:59.932038 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:26:59.932128 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:26:59.932198 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:26:59.932278 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.932353 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:26:59.932420 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:26:59.932488 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.932558 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:26:59.932624 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:26:59.932689 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.932758 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:26:59.932823 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:26:59.932891 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.932974 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:26:59.933042 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:26:59.933107 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.933193 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:26:59.933273 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:26:59.933352 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.933422 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:26:59.933488 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:26:59.933554 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.933621 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:26:59.933691 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:26:59.933756 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:26:59.933767 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:26:59.933831 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 13 19:26:59.933898 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 13 19:26:59.934044 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 13 19:26:59.934061 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:26:59.934071 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:26:59.934080 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:26:59.934153 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 13 19:26:59.934228 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 13 19:26:59.934240 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:26:59.934248 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:26:59.934368 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 13 19:26:59.934383 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 13 19:26:59.934391 kernel: thunder_xcv, ver 1.0
Apr 13 19:26:59.934402 kernel: thunder_bgx, ver 1.0
Apr 13 19:26:59.934410 kernel: nicpf, ver 1.0
Apr 13 19:26:59.934418 kernel: nicvf, ver 1.0
Apr 13 19:26:59.934506 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:26:59.934590 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:26:59 UTC (1776108419)
Apr 13 19:26:59.934602 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:26:59.934610 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 13 19:26:59.934618 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:26:59.934629 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:26:59.934637 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:26:59.934645 kernel: Segment Routing with IPv6
Apr 13 19:26:59.934653 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:26:59.934660 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:26:59.934668 kernel: Key type dns_resolver registered
Apr 13 19:26:59.934676 kernel: registered taskstats version 1
Apr 13 19:26:59.934684 kernel: Loading compiled-in X.509 certificates
Apr 13 19:26:59.934692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:26:59.934702 kernel: Key type .fscrypt registered
Apr 13 19:26:59.934709 kernel: Key type fscrypt-provisioning registered
Apr 13 19:26:59.934717 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:26:59.934725 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:26:59.934733 kernel: ima: No architecture policies found
Apr 13 19:26:59.934741 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:26:59.934748 kernel: clk: Disabling unused clocks
Apr 13 19:26:59.934756 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:26:59.934764 kernel: Run /init as init process
Apr 13 19:26:59.934773 kernel: with arguments:
Apr 13 19:26:59.934781 kernel: /init
Apr 13 19:26:59.934789 kernel: with environment:
Apr 13 19:26:59.934797 kernel: HOME=/
Apr 13 19:26:59.934804 kernel: TERM=linux
Apr 13 19:26:59.934814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:26:59.934824 systemd[1]: Detected virtualization kvm.
Apr 13 19:26:59.934833 systemd[1]: Detected architecture arm64.
Apr 13 19:26:59.934845 systemd[1]: Running in initrd.
Apr 13 19:26:59.934853 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:26:59.934861 systemd[1]: Hostname set to .
Apr 13 19:26:59.934872 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:26:59.934881 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:26:59.934889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:59.934898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:26:59.934906 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:26:59.934917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:26:59.934925 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:26:59.934989 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:26:59.935002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:26:59.935011 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:26:59.935019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:59.935028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:59.935041 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:26:59.935049 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:26:59.935058 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:26:59.935066 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:26:59.935074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:26:59.935083 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:26:59.935091 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:26:59.935099 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:26:59.935109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:26:59.935117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:26:59.935126 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:26:59.935134 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:26:59.935142 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:26:59.935150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:26:59.935159 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:26:59.935167 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:26:59.935175 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:26:59.935185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:26:59.935216 systemd-journald[237]: Collecting audit messages is disabled.
Apr 13 19:26:59.935237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:59.935246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:26:59.935256 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:26:59.935280 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:26:59.935290 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:26:59.935299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:59.935311 systemd-journald[237]: Journal started
Apr 13 19:26:59.935331 systemd-journald[237]: Runtime Journal (/run/log/journal/4349292517b54efeaa07b7bc741edf11) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:26:59.925970 systemd-modules-load[238]: Inserted module 'overlay'
Apr 13 19:26:59.943698 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:26:59.943789 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:26:59.947655 systemd-modules-load[238]: Inserted module 'br_netfilter'
Apr 13 19:26:59.950152 kernel: Bridge firewalling registered
Apr 13 19:26:59.952550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:26:59.954484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:26:59.957657 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:26:59.958739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:26:59.967009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:26:59.970145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:26:59.983576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:26:59.985457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:59.988754 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:26:59.999890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:27:00.004399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:27:00.011169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:27:00.013123 dracut-cmdline[269]: dracut-dracut-053
Apr 13 19:27:00.018055 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:27:00.049548 systemd-resolved[278]: Positive Trust Anchors:
Apr 13 19:27:00.050309 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:27:00.050344 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:27:00.060368 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 13 19:27:00.062875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:27:00.063760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:27:00.126022 kernel: SCSI subsystem initialized
Apr 13 19:27:00.130985 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:27:00.139235 kernel: iscsi: registered transport (tcp)
Apr 13 19:27:00.153022 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:27:00.153160 kernel: QLogic iSCSI HBA Driver
Apr 13 19:27:00.200603 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:27:00.207117 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:27:00.227206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:27:00.227282 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:27:00.227955 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:27:00.278010 kernel: raid6: neonx8 gen() 15666 MB/s
Apr 13 19:27:00.295021 kernel: raid6: neonx4 gen() 14244 MB/s
Apr 13 19:27:00.312048 kernel: raid6: neonx2 gen() 13202 MB/s
Apr 13 19:27:00.329008 kernel: raid6: neonx1 gen() 10406 MB/s
Apr 13 19:27:00.345992 kernel: raid6: int64x8 gen() 6902 MB/s
Apr 13 19:27:00.363005 kernel: raid6: int64x4 gen() 7289 MB/s
Apr 13 19:27:00.379995 kernel: raid6: int64x2 gen() 6086 MB/s
Apr 13 19:27:00.396998 kernel: raid6: int64x1 gen() 5012 MB/s
Apr 13 19:27:00.397062 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s
Apr 13 19:27:00.413992 kernel: raid6: .... xor() 11883 MB/s, rmw enabled
Apr 13 19:27:00.414058 kernel: raid6: using neon recovery algorithm
Apr 13 19:27:00.418984 kernel: xor: measuring software checksum speed
Apr 13 19:27:00.419057 kernel: 8regs : 19802 MB/sec
Apr 13 19:27:00.419080 kernel: 32regs : 17286 MB/sec
Apr 13 19:27:00.420042 kernel: arm64_neon : 27132 MB/sec
Apr 13 19:27:00.420075 kernel: xor: using function: arm64_neon (27132 MB/sec)
Apr 13 19:27:00.470992 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:27:00.486594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:27:00.494231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:27:00.508303 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Apr 13 19:27:00.511829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:27:00.523367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:27:00.537421 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Apr 13 19:27:00.580018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:27:00.586226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:27:00.648807 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:27:00.656131 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:27:00.684255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:27:00.686717 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:27:00.687537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:27:00.690914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:27:00.700531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:27:00.713700 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:27:00.761067 kernel: scsi host0: Virtio SCSI HBA
Apr 13 19:27:00.764538 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 19:27:00.764623 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 19:27:00.766309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:27:00.766445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:27:00.769705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:27:00.770757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:27:00.776916 kernel: ACPI: bus type USB registered
Apr 13 19:27:00.770927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:27:00.777416 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:27:00.792732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:27:00.798028 kernel: usbcore: registered new interface driver usbfs
Apr 13 19:27:00.798057 kernel: usbcore: registered new interface driver hub
Apr 13 19:27:00.799974 kernel: usbcore: registered new device driver usb
Apr 13 19:27:00.825510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:27:00.832206 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 13 19:27:00.834189 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:27:00.837092 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:27:00.837250 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 13 19:27:00.837359 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 13 19:27:00.839708 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:27:00.839897 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 13 19:27:00.840035 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 13 19:27:00.840127 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:27:00.840138 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 13 19:27:00.840962 kernel: hub 1-0:1.0: USB hub found
Apr 13 19:27:00.841115 kernel: hub 1-0:1.0: 4 ports detected
Apr 13 19:27:00.842559 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 13 19:27:00.842707 kernel: hub 2-0:1.0: USB hub found
Apr 13 19:27:00.842805 kernel: hub 2-0:1.0: 4 ports detected
Apr 13 19:27:00.843988 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 13 19:27:00.858593 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 13 19:27:00.858820 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 13 19:27:00.860163 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 13 19:27:00.860399 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 13 19:27:00.860503 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 19:27:00.868057 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:27:00.868123 kernel: GPT:17805311 != 80003071
Apr 13 19:27:00.868133 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:27:00.868143 kernel: GPT:17805311 != 80003071
Apr 13 19:27:00.868153 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:27:00.869428 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:27:00.873458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:27:00.877751 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 13 19:27:00.909980 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (517)
Apr 13 19:27:00.914974 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (509)
Apr 13 19:27:00.923998 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 19:27:00.933572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 19:27:00.946575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:27:00.952888 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 19:27:00.954079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 19:27:00.976200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:27:00.986038 disk-uuid[580]: Primary Header is updated.
Apr 13 19:27:00.986038 disk-uuid[580]: Secondary Entries is updated.
Apr 13 19:27:00.986038 disk-uuid[580]: Secondary Header is updated.
Apr 13 19:27:00.991968 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:27:00.997966 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:27:01.002987 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:27:01.085011 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 13 19:27:01.230019 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 13 19:27:01.230082 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 13 19:27:01.230315 kernel: usbcore: registered new interface driver usbhid
Apr 13 19:27:01.231317 kernel: usbhid: USB HID core driver
Apr 13 19:27:01.334032 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 13 19:27:01.462988 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 13 19:27:01.517419 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 13 19:27:02.007027 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:27:02.008382 disk-uuid[581]: The operation has completed successfully.
Apr 13 19:27:02.062896 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:27:02.063106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:27:02.089314 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:27:02.096193 sh[598]: Success
Apr 13 19:27:02.112095 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:27:02.158518 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:27:02.172380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:27:02.174681 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:27:02.202242 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:27:02.202346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:27:02.202377 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:27:02.203336 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:27:02.203380 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:27:02.211025 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:27:02.213772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:27:02.215360 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:27:02.224233 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:27:02.228633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:27:02.241330 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:27:02.241389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:27:02.241400 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:27:02.248231 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:27:02.248330 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:27:02.260426 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:27:02.259988 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:27:02.266678 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:27:02.273431 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:27:02.348203 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:27:02.355185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:27:02.374583 ignition[698]: Ignition 2.19.0
Apr 13 19:27:02.374598 ignition[698]: Stage: fetch-offline
Apr 13 19:27:02.374635 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:02.378308 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:27:02.374643 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:02.374799 ignition[698]: parsed url from cmdline: ""
Apr 13 19:27:02.374803 ignition[698]: no config URL provided
Apr 13 19:27:02.374807 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:27:02.374815 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:27:02.374819 ignition[698]: failed to fetch config: resource requires networking
Apr 13 19:27:02.375023 ignition[698]: Ignition finished successfully
Apr 13 19:27:02.389199 systemd-networkd[784]: lo: Link UP
Apr 13 19:27:02.389211 systemd-networkd[784]: lo: Gained carrier
Apr 13 19:27:02.391087 systemd-networkd[784]: Enumeration completed
Apr 13 19:27:02.391218 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:27:02.391871 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:02.391875 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:27:02.392446 systemd[1]: Reached target network.target - Network.
Apr 13 19:27:02.393387 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:02.393391 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:27:02.394074 systemd-networkd[784]: eth0: Link UP
Apr 13 19:27:02.394078 systemd-networkd[784]: eth0: Gained carrier
Apr 13 19:27:02.394084 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:02.402160 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:27:02.402607 systemd-networkd[784]: eth1: Link UP
Apr 13 19:27:02.402613 systemd-networkd[784]: eth1: Gained carrier
Apr 13 19:27:02.402629 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:02.419711 ignition[790]: Ignition 2.19.0
Apr 13 19:27:02.419736 ignition[790]: Stage: fetch
Apr 13 19:27:02.420031 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:02.420045 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:02.420161 ignition[790]: parsed url from cmdline: ""
Apr 13 19:27:02.420164 ignition[790]: no config URL provided
Apr 13 19:27:02.420169 ignition[790]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:27:02.420177 ignition[790]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:27:02.420200 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 19:27:02.420897 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 19:27:02.449064 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:27:02.458101 systemd-networkd[784]: eth0: DHCPv4 address 49.13.63.18/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:27:02.621580 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 19:27:02.627736 ignition[790]: GET result: OK
Apr 13 19:27:02.627848 ignition[790]: parsing config with SHA512: 891c956932e8db11ed1cfc5c1d937f2b4dc98e54d796c6b1b7aa33e30c6f1f5b9fc68cf66fa186284ae1f211b2ee1af2a0a95a9524c2cdf7923deee0a8cbdbe3
Apr 13 19:27:02.634517 unknown[790]: fetched base config from "system"
Apr 13 19:27:02.634534 unknown[790]: fetched base config from "system"
Apr 13 19:27:02.635665 ignition[790]: fetch: fetch complete
Apr 13 19:27:02.634544 unknown[790]: fetched user config from "hetzner"
Apr 13 19:27:02.635670 ignition[790]: fetch: fetch passed
Apr 13 19:27:02.635735 ignition[790]: Ignition finished successfully
Apr 13 19:27:02.641193 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:27:02.650284 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:27:02.665546 ignition[797]: Ignition 2.19.0
Apr 13 19:27:02.665556 ignition[797]: Stage: kargs
Apr 13 19:27:02.665746 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:02.665756 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:02.666886 ignition[797]: kargs: kargs passed
Apr 13 19:27:02.669426 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:27:02.666956 ignition[797]: Ignition finished successfully
Apr 13 19:27:02.675134 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:27:02.689449 ignition[803]: Ignition 2.19.0
Apr 13 19:27:02.689462 ignition[803]: Stage: disks
Apr 13 19:27:02.689629 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:02.693786 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:27:02.689638 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:02.690726 ignition[803]: disks: disks passed
Apr 13 19:27:02.690783 ignition[803]: Ignition finished successfully
Apr 13 19:27:02.698236 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:27:02.699007 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:27:02.700385 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:27:02.702742 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:27:02.704912 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:27:02.713220 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:27:02.732146 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 19:27:02.736156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:27:02.751212 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:27:02.802969 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:27:02.804613 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:27:02.807302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:27:02.816090 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:27:02.820062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:27:02.831961 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819)
Apr 13 19:27:02.834857 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 19:27:02.837088 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:27:02.837130 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:27:02.837335 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:27:02.841270 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:27:02.839710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:27:02.847442 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:27:02.847507 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:27:02.846857 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:27:02.851781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:27:02.864608 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:27:02.900027 coreos-metadata[821]: Apr 13 19:27:02.898 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 19:27:02.902751 coreos-metadata[821]: Apr 13 19:27:02.902 INFO Fetch successful
Apr 13 19:27:02.902751 coreos-metadata[821]: Apr 13 19:27:02.902 INFO wrote hostname ci-4081-3-7-8-01d4258341 to /sysroot/etc/hostname
Apr 13 19:27:02.909175 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:27:02.913262 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:27:02.919556 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:27:02.924728 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:27:02.929903 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:27:03.052036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:27:03.061103 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:27:03.065156 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:27:03.076040 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:27:03.096003 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:27:03.105787 ignition[936]: INFO : Ignition 2.19.0
Apr 13 19:27:03.106680 ignition[936]: INFO : Stage: mount
Apr 13 19:27:03.107398 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:03.109007 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:03.109905 ignition[936]: INFO : mount: mount passed
Apr 13 19:27:03.110545 ignition[936]: INFO : Ignition finished successfully
Apr 13 19:27:03.113042 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:27:03.120103 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:27:03.203013 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:27:03.211281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:27:03.220990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (947)
Apr 13 19:27:03.224063 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:27:03.224138 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:27:03.224162 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:27:03.228973 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:27:03.229031 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:27:03.233423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:27:03.256422 ignition[964]: INFO : Ignition 2.19.0
Apr 13 19:27:03.257149 ignition[964]: INFO : Stage: files
Apr 13 19:27:03.257525 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:03.257525 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:03.259043 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:27:03.260854 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:27:03.260854 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:27:03.266334 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:27:03.267962 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:27:03.269583 unknown[964]: wrote ssh authorized keys file for user: core
Apr 13 19:27:03.270960 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:27:03.272812 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:27:03.274233 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:27:03.274233 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:27:03.274233 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:27:03.369635 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:27:03.506114 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 13 19:27:03.578387 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:27:03.579712 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 13 19:27:03.634187 systemd-networkd[784]: eth1: Gained IPv6LL
Apr 13 19:27:03.911600 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:27:04.496541 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:27:04.496541 ignition[964]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 13 19:27:04.502704 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:27:04.518677 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:27:04.518677 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:27:04.518677 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:27:04.518677 ignition[964]: INFO : files: files passed
Apr 13 19:27:04.518677 ignition[964]: INFO : Ignition finished successfully
Apr 13 19:27:04.507876 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:27:04.514115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:27:04.524158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:27:04.534159 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:27:04.534991 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:27:04.538506 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:27:04.540539 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:27:04.542030 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:27:04.543914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:27:04.546002 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:27:04.555209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:27:04.585792 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:27:04.586622 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:27:04.588337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:27:04.588979 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:27:04.590920 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:27:04.597172 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:27:04.610364 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:27:04.618261 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:27:04.629473 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:27:04.630280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:27:04.631573 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:27:04.632730 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:27:04.632857 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:27:04.634529 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:27:04.635224 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:27:04.636555 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:27:04.638449 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:27:04.640067 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:27:04.641848 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:27:04.643051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:27:04.644370 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:27:04.645456 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:27:04.646602 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:27:04.647505 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:27:04.647637 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:27:04.648924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:27:04.649601 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:27:04.650646 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:27:04.650727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:27:04.651819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:27:04.651952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:27:04.653590 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:27:04.653712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:27:04.654893 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:27:04.655006 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:27:04.656224 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:27:04.656343 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:27:04.666213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:27:04.666990 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:27:04.667165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:27:04.674211 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:27:04.675635 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:27:04.676101 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:27:04.679514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:27:04.679804 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:27:04.689990 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:27:04.690418 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:27:04.693014 ignition[1017]: INFO : Ignition 2.19.0
Apr 13 19:27:04.693014 ignition[1017]: INFO : Stage: umount
Apr 13 19:27:04.694115 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:27:04.694115 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:27:04.696422 ignition[1017]: INFO : umount: umount passed
Apr 13 19:27:04.696422 ignition[1017]: INFO : Ignition finished successfully
Apr 13 19:27:04.698142 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:27:04.698277 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:27:04.700357 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:27:04.700477 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:27:04.701168 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:27:04.701220 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:27:04.704802 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:27:04.704848 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:27:04.705916 systemd[1]: Stopped target network.target - Network.
Apr 13 19:27:04.707930 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:27:04.709447 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:27:04.710882 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:27:04.711551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:27:04.717068 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:27:04.719772 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:27:04.720461 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:27:04.721521 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:27:04.721567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:27:04.722560 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:27:04.722597 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:27:04.723665 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:27:04.723719 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:27:04.724908 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:27:04.724963 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:27:04.726490 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:27:04.729633 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:27:04.731400 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:27:04.731893 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:27:04.732013 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:27:04.733097 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:27:04.733188 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:27:04.737738 systemd-networkd[784]: eth1: DHCPv6 lease lost
Apr 13 19:27:04.738746 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:27:04.738866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:27:04.740127 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 13 19:27:04.740928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:27:04.741037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:27:04.742885 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:27:04.743019 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:27:04.744548 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:27:04.744615 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:27:04.750121 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:27:04.751574 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:27:04.751648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:27:04.752800 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:27:04.752844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:27:04.754816 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:27:04.754862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:27:04.755743 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:27:04.767671 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:27:04.768553 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:27:04.778348 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:27:04.779305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:27:04.781070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:27:04.781136 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:27:04.782584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:27:04.782635 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:27:04.783403 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:27:04.783454 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:27:04.785189 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:27:04.785275 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:27:04.786843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:27:04.786897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:27:04.796298 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:27:04.797485 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:27:04.797586 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:27:04.801318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:27:04.801406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:27:04.808917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:27:04.809138 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:27:04.810785 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:27:04.818142 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:27:04.828057 systemd[1]: Switching root.
Apr 13 19:27:04.857373 systemd-journald[237]: Journal stopped
Apr 13 19:27:05.836714 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:27:05.836782 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:27:05.836795 kernel: SELinux: policy capability open_perms=1
Apr 13 19:27:05.836807 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:27:05.836820 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:27:05.836830 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:27:05.836839 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:27:05.836848 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:27:05.836858 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:27:05.836868 systemd[1]: Successfully loaded SELinux policy in 34.034ms.
Apr 13 19:27:05.836887 kernel: audit: type=1403 audit(1776108425.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:27:05.836903 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.242ms.
Apr 13 19:27:05.836915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:27:05.836928 systemd[1]: Detected virtualization kvm.
Apr 13 19:27:05.836950 systemd[1]: Detected architecture arm64.
Apr 13 19:27:05.836961 systemd[1]: Detected first boot.
Apr 13 19:27:05.836974 systemd[1]: Hostname set to .
Apr 13 19:27:05.836988 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:27:05.836999 zram_generator::config[1077]: No configuration found.
Apr 13 19:27:05.837009 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:27:05.837024 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:27:05.837035 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 19:27:05.837046 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:27:05.837057 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:27:05.837067 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:27:05.837077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:27:05.837088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:27:05.837098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:27:05.837108 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:27:05.837120 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:27:05.837131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:27:05.837142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:27:05.837152 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:27:05.837163 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:27:05.837173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:27:05.837184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:27:05.837194 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:27:05.837204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:27:05.837216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:27:05.837227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:27:05.837251 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:27:05.837262 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:27:05.837272 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:27:05.837282 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:27:05.837294 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:27:05.837305 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:27:05.837315 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:27:05.837326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:27:05.837336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:27:05.837346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:27:05.837357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:27:05.837371 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:27:05.837381 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:27:05.837391 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:27:05.837403 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:27:05.837413 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:27:05.837423 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:27:05.837433 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:27:05.837444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:27:05.837458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:27:05.837470 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:27:05.837482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:27:05.837492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:27:05.837502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:27:05.837513 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:27:05.837523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:27:05.837534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:27:05.837544 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 13 19:27:05.837557 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 13 19:27:05.837566 kernel: fuse: init (API version 7.39)
Apr 13 19:27:05.837577 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:27:05.837587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:27:05.837597 kernel: ACPI: bus type drm_connector registered
Apr 13 19:27:05.837608 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:27:05.837618 kernel: loop: module loaded
Apr 13 19:27:05.837628 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:27:05.837639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:27:05.837650 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:27:05.837660 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:27:05.837692 systemd-journald[1159]: Collecting audit messages is disabled.
Apr 13 19:27:05.837714 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:27:05.837725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:27:05.837736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:27:05.837747 systemd-journald[1159]: Journal started
Apr 13 19:27:05.837771 systemd-journald[1159]: Runtime Journal (/run/log/journal/4349292517b54efeaa07b7bc741edf11) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:27:05.842250 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:27:05.842829 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:27:05.846121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:27:05.847085 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:27:05.847288 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:27:05.848172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:27:05.848340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:27:05.851314 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:27:05.851474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:27:05.852367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:27:05.852520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:27:05.853535 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:27:05.853679 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:27:05.855508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:27:05.857530 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:27:05.859576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:27:05.863270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:27:05.866509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:27:05.880532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:27:05.882344 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:27:05.890184 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:27:05.894468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:27:05.895227 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:27:05.904286 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:27:05.911580 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:27:05.915126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:27:05.925464 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:27:05.927344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:27:05.939243 systemd-journald[1159]: Time spent on flushing to /var/log/journal/4349292517b54efeaa07b7bc741edf11 is 33.366ms for 1108 entries.
Apr 13 19:27:05.939243 systemd-journald[1159]: System Journal (/var/log/journal/4349292517b54efeaa07b7bc741edf11) is 8.0M, max 584.8M, 576.8M free.
Apr 13 19:27:06.008479 systemd-journald[1159]: Received client request to flush runtime journal.
Apr 13 19:27:05.938371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:27:05.954317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:27:05.962542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:27:05.965596 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:27:05.966560 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:27:05.979165 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:27:05.980378 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:27:05.981348 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:27:06.004000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:27:06.013497 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 19:27:06.014995 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:27:06.023160 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 13 19:27:06.023179 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 13 19:27:06.027829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:27:06.042323 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:27:06.076915 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:27:06.085242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:27:06.099406 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Apr 13 19:27:06.099426 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Apr 13 19:27:06.106445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:27:06.439461 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:27:06.448286 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:27:06.477129 systemd-udevd[1242]: Using default interface naming scheme 'v255'.
Apr 13 19:27:06.497301 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:27:06.509191 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:27:06.529381 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:27:06.571575 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 13 19:27:06.645868 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:27:06.665968 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 19:27:06.717361 systemd-networkd[1251]: lo: Link UP
Apr 13 19:27:06.717869 systemd-networkd[1251]: lo: Gained carrier
Apr 13 19:27:06.720404 systemd-networkd[1251]: Enumeration completed
Apr 13 19:27:06.720687 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:27:06.723084 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:06.723091 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:27:06.726323 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:06.726405 systemd-networkd[1251]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:27:06.727793 systemd-networkd[1251]: eth0: Link UP
Apr 13 19:27:06.727876 systemd-networkd[1251]: eth0: Gained carrier
Apr 13 19:27:06.727974 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:06.732097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:27:06.733316 systemd-networkd[1251]: eth1: Link UP
Apr 13 19:27:06.733397 systemd-networkd[1251]: eth1: Gained carrier
Apr 13 19:27:06.733414 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:27:06.738183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1244)
Apr 13 19:27:06.751325 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Apr 13 19:27:06.751350 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 13 19:27:06.751514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:27:06.759138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:27:06.772153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:27:06.772928 systemd-networkd[1251]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:27:06.791657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:27:06.792324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:27:06.792370 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:27:06.805500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:27:06.805688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:27:06.807993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:27:06.808163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:27:06.811282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:27:06.815994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:27:06.820078 systemd-networkd[1251]: eth0: DHCPv4 address 49.13.63.18/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:27:06.841536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:27:06.855140 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 13 19:27:06.855209 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 13 19:27:06.855280 kernel: [drm] features: -context_init
Apr 13 19:27:06.856041 kernel: [drm] number of scanouts: 1
Apr 13 19:27:06.856086 kernel: [drm] number of cap sets: 0
Apr 13 19:27:06.857964 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 13 19:27:06.860504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:27:06.860912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:27:06.866208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:27:06.869987 kernel: Console: switching to colour frame buffer device 160x50
Apr 13 19:27:06.874495 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 13 19:27:06.880742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:27:06.881120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:27:06.883588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:27:06.951263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:27:07.005693 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:27:07.021310 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:27:07.038977 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:27:07.069736 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:27:07.071857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:27:07.079489 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:27:07.083412 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:27:07.112800 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:27:07.115964 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:27:07.116672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:27:07.116704 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:27:07.117391 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:27:07.119282 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:27:07.125208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:27:07.130115 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:27:07.131284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:27:07.134444 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:27:07.139922 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:27:07.151286 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:27:07.153776 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:27:07.156568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:27:07.175449 kernel: loop0: detected capacity change from 0 to 114432
Apr 13 19:27:07.184831 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:27:07.186450 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:27:07.198973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:27:07.214293 kernel: loop1: detected capacity change from 0 to 8
Apr 13 19:27:07.234961 kernel: loop2: detected capacity change from 0 to 114328
Apr 13 19:27:07.268358 kernel: loop3: detected capacity change from 0 to 209336
Apr 13 19:27:07.305970 kernel: loop4: detected capacity change from 0 to 114432
Apr 13 19:27:07.321217 kernel: loop5: detected capacity change from 0 to 8
Apr 13 19:27:07.326038 kernel: loop6: detected capacity change from 0 to 114328
Apr 13 19:27:07.337040 kernel: loop7: detected capacity change from 0 to 209336
Apr 13 19:27:07.355594 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 13 19:27:07.356484 (sd-merge)[1336]: Merged extensions into '/usr'.
Apr 13 19:27:07.361629 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:27:07.361645 systemd[1]: Reloading...
Apr 13 19:27:07.444305 zram_generator::config[1364]: No configuration found.
Apr 13 19:27:07.542315 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:27:07.565268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:27:07.621818 systemd[1]: Reloading finished in 259 ms.
Apr 13 19:27:07.639946 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:27:07.641903 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:27:07.650080 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:27:07.654185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:27:07.660167 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:27:07.660303 systemd[1]: Reloading...
Apr 13 19:27:07.675843 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:27:07.676199 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:27:07.676922 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:27:07.677400 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 13 19:27:07.677464 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 13 19:27:07.682202 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:27:07.682213 systemd-tmpfiles[1409]: Skipping /boot
Apr 13 19:27:07.691564 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:27:07.691580 systemd-tmpfiles[1409]: Skipping /boot
Apr 13 19:27:07.732999 zram_generator::config[1436]: No configuration found.
Apr 13 19:27:07.844334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:27:07.905956 systemd[1]: Reloading finished in 245 ms.
Apr 13 19:27:07.924954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:27:07.941198 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:27:07.949112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:27:07.955360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:27:07.961148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:27:07.972306 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:27:07.980659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:27:07.985176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:27:07.992321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:27:08.008461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:27:08.010923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:27:08.015445 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:27:08.020385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:27:08.020548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:27:08.023784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:27:08.023964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:27:08.027549 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:27:08.030205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:27:08.042261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:27:08.047670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:27:08.053394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:27:08.065229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:27:08.067644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:27:08.072839 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:27:08.075903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:27:08.079778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:27:08.080822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:27:08.083501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:27:08.091049 augenrules[1523]: No rules
Apr 13 19:27:08.096794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:27:08.097088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:27:08.100993 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:27:08.102300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:27:08.103184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:27:08.119500 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:27:08.123869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:27:08.133066 systemd-resolved[1489]: Positive Trust Anchors:
Apr 13 19:27:08.133276 systemd-resolved[1489]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:27:08.133316 systemd-resolved[1489]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:27:08.133903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:27:08.138140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:27:08.138814 systemd-resolved[1489]: Using system hostname 'ci-4081-3-7-8-01d4258341'.
Apr 13 19:27:08.154007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:27:08.160166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:27:08.160869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:27:08.160931 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:27:08.161255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:27:08.163785 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:27:08.165104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:27:08.165279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:27:08.166863 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:27:08.167027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:27:08.167926 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:27:08.168089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:27:08.168983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:27:08.172190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:27:08.175156 systemd[1]: Reached target network.target - Network.
Apr 13 19:27:08.176904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:27:08.177846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:27:08.178095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:27:08.178323 systemd-networkd[1251]: eth0: Gained IPv6LL
Apr 13 19:27:08.188172 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 19:27:08.189429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:27:08.191918 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:27:08.235198 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 19:27:08.238343 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:27:08.240827 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:27:08.241743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:27:08.242691 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:27:08.243513 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 19:27:08.243552 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:27:08.244087 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:27:08.244869 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:27:08.245765 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:27:08.246601 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:27:08.249073 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 19:27:08.253733 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 19:27:08.256155 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 19:27:08.259669 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 19:27:08.261091 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:27:08.262176 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:27:08.264380 systemd[1]: System is tainted: cgroupsv1
Apr 13 19:27:08.264449 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:27:08.264482 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:27:08.265724 systemd-timesyncd[1558]: Contacted time server 46.4.54.78:123 (0.flatcar.pool.ntp.org).
Apr 13 19:27:08.265818 systemd-timesyncd[1558]: Initial clock synchronization to Mon 2026-04-13 19:27:08.484788 UTC.
Apr 13 19:27:08.272180 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 19:27:08.274641 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 19:27:08.278122 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 19:27:08.285431 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 19:27:08.290034 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 19:27:08.290645 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 19:27:08.295808 jq[1567]: false
Apr 13 19:27:08.301467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:08.310606 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 19:27:08.314411 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:27:08.328502 dbus-daemon[1565]: [system] SELinux support is enabled
Apr 13 19:27:08.335143 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 19:27:08.339558 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found loop4
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found loop5
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found loop6
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found loop7
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda1
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda2
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda3
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found usr
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda4
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda6
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda7
Apr 13 19:27:08.347002 extend-filesystems[1568]: Found sda9
Apr 13 19:27:08.347002 extend-filesystems[1568]: Checking size of /dev/sda9
Apr 13 19:27:08.347949 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 19:27:08.351156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 19:27:08.368161 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 19:27:08.369584 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 19:27:08.376126 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 19:27:08.381266 coreos-metadata[1564]: Apr 13 19:27:08.381 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 13 19:27:08.387366 coreos-metadata[1564]: Apr 13 19:27:08.384 INFO Fetch successful
Apr 13 19:27:08.387366 coreos-metadata[1564]: Apr 13 19:27:08.385 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 13 19:27:08.387366 coreos-metadata[1564]: Apr 13 19:27:08.386 INFO Fetch successful
Apr 13 19:27:08.393167 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 19:27:08.396649 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 19:27:08.405271 extend-filesystems[1568]: Resized partition /dev/sda9
Apr 13 19:27:08.417064 extend-filesystems[1602]: resize2fs 1.47.1 (20-May-2024)
Apr 13 19:27:08.414323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 19:27:08.414570 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 19:27:08.423167 jq[1590]: true
Apr 13 19:27:08.427253 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 13 19:27:08.428402 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 19:27:08.428641 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 19:27:08.466161 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:27:08.471586 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 19:27:08.474287 update_engine[1588]: I20260413 19:27:08.472658 1588 main.cc:92] Flatcar Update Engine starting
Apr 13 19:27:08.483902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 19:27:08.483983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 19:27:08.491995 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 19:27:08.492024 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 19:27:08.506131 update_engine[1588]: I20260413 19:27:08.505692 1588 update_check_scheduler.cc:74] Next update check in 2m20s
Apr 13 19:27:08.510369 jq[1612]: true
Apr 13 19:27:08.516606 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:27:08.516845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:27:08.548475 tar[1607]: linux-arm64/LICENSE
Apr 13 19:27:08.549157 tar[1607]: linux-arm64/helm
Apr 13 19:27:08.554384 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:27:08.565839 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 19:27:08.567096 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:27:08.581681 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 13 19:27:08.626584 extend-filesystems[1602]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 19:27:08.626584 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 13 19:27:08.626584 extend-filesystems[1602]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 13 19:27:08.648243 extend-filesystems[1568]: Resized filesystem in /dev/sda9
Apr 13 19:27:08.648243 extend-filesystems[1568]: Found sr0
Apr 13 19:27:08.627538 systemd-networkd[1251]: eth1: Gained IPv6LL
Apr 13 19:27:08.628439 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:27:08.628749 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:27:08.688143 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1257)
Apr 13 19:27:08.697915 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:27:08.700206 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:27:08.731652 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:27:08.740732 systemd-logind[1586]: New seat seat0.
Apr 13 19:27:08.752400 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 13 19:27:08.752423 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 13 19:27:08.752707 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:27:08.771097 bash[1671]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:27:08.774134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:27:08.782286 systemd[1]: Starting sshkeys.service...
Apr 13 19:27:08.798897 containerd[1620]: time="2026-04-13T19:27:08.798793520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:27:08.804801 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 19:27:08.814616 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 19:27:08.842166 coreos-metadata[1675]: Apr 13 19:27:08.842 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 13 19:27:08.845342 coreos-metadata[1675]: Apr 13 19:27:08.844 INFO Fetch successful
Apr 13 19:27:08.847183 unknown[1675]: wrote ssh authorized keys file for user: core
Apr 13 19:27:08.886156 update-ssh-keys[1680]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:27:08.892274 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 19:27:08.897476 systemd[1]: Finished sshkeys.service.
Apr 13 19:27:08.908650 containerd[1620]: time="2026-04-13T19:27:08.906929720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.912594 containerd[1620]: time="2026-04-13T19:27:08.912520560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:27:08.912715 containerd[1620]: time="2026-04-13T19:27:08.912700760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:27:08.912824 containerd[1620]: time="2026-04-13T19:27:08.912784640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915156120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915197520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915356440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915375440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915640440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915657880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915673240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915684240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916003 containerd[1620]: time="2026-04-13T19:27:08.915806400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.916503 containerd[1620]: time="2026-04-13T19:27:08.916478760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:27:08.917015 containerd[1620]: time="2026-04-13T19:27:08.916755240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:27:08.917015 containerd[1620]: time="2026-04-13T19:27:08.916777800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:27:08.917015 containerd[1620]: time="2026-04-13T19:27:08.916877440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:27:08.917127 containerd[1620]: time="2026-04-13T19:27:08.916918680Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:27:08.923882 containerd[1620]: time="2026-04-13T19:27:08.923810200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:27:08.924655 containerd[1620]: time="2026-04-13T19:27:08.924031560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:27:08.924655 containerd[1620]: time="2026-04-13T19:27:08.924059760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:27:08.924655 containerd[1620]: time="2026-04-13T19:27:08.924130200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:27:08.924655 containerd[1620]: time="2026-04-13T19:27:08.924150040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:27:08.924655 containerd[1620]: time="2026-04-13T19:27:08.924412640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:27:08.926198 containerd[1620]: time="2026-04-13T19:27:08.926170760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:27:08.927139 containerd[1620]: time="2026-04-13T19:27:08.927114280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:27:08.927275 containerd[1620]: time="2026-04-13T19:27:08.927255720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927385960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927415440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927429440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927457640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927477040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927491680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927503880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927522000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927536280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927559840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927574840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927588640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927606720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.928946 containerd[1620]: time="2026-04-13T19:27:08.927619040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927631720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927644200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927657080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927670480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927705240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927720400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927732680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927748400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927763440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927783960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927795600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927806800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:27:08.929248 containerd[1620]: time="2026-04-13T19:27:08.927921280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929518200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929540000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929552400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929562960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929581520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929591920Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:27:08.930182 containerd[1620]: time="2026-04-13T19:27:08.929602160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:27:08.930398 containerd[1620]: time="2026-04-13T19:27:08.929976000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:27:08.930398 containerd[1620]: time="2026-04-13T19:27:08.930034920Z" level=info msg="Connect containerd service"
Apr 13 19:27:08.930398 containerd[1620]: time="2026-04-13T19:27:08.930139600Z" level=info msg="using legacy CRI server"
Apr 13 19:27:08.930398 containerd[1620]: time="2026-04-13T19:27:08.930147160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.933030240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.933747840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934294880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934341320Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934482000Z" level=info msg="Start subscribing containerd event"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934530280Z" level=info msg="Start recovering state"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934587640Z" level=info msg="Start event monitor"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934598880Z" level=info msg="Start snapshots syncer"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934607680Z" level=info msg="Start cni network conf syncer for default"
Apr 13 19:27:08.935581 containerd[1620]: time="2026-04-13T19:27:08.934615200Z" level=info msg="Start streaming server"
Apr 13 19:27:08.934850 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 19:27:08.938332 containerd[1620]: time="2026-04-13T19:27:08.938306480Z" level=info msg="containerd successfully booted in 0.151133s"
Apr 13 19:27:09.274559 sshd_keygen[1613]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:27:09.321379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:27:09.333295 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:27:09.344339 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:27:09.344613 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:27:09.357259 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 19:27:09.374416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 19:27:09.386310 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 19:27:09.398360 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 13 19:27:09.400284 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 19:27:09.463728 tar[1607]: linux-arm64/README.md
Apr 13 19:27:09.482763 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:27:09.677238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:09.679176 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 19:27:09.683656 systemd[1]: Startup finished in 6.184s (kernel) + 4.633s (userspace) = 10.817s.
Apr 13 19:27:09.685456 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:27:10.225524 kubelet[1722]: E0413 19:27:10.225431 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:27:10.233177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:27:10.233425 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:27:13.175686 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 19:27:13.183286 systemd[1]: Started sshd@0-49.13.63.18:22-50.85.169.122:52178.service - OpenSSH per-connection server daemon (50.85.169.122:52178).
Apr 13 19:27:13.313983 sshd[1734]: Accepted publickey for core from 50.85.169.122 port 52178 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:13.317335 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:13.328823 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 19:27:13.338499 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 19:27:13.343108 systemd-logind[1586]: New session 1 of user core.
Apr 13 19:27:13.355421 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 19:27:13.368832 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 19:27:13.375373 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 19:27:13.493412 systemd[1740]: Queued start job for default target default.target.
Apr 13 19:27:13.493864 systemd[1740]: Created slice app.slice - User Application Slice.
Apr 13 19:27:13.493883 systemd[1740]: Reached target paths.target - Paths.
Apr 13 19:27:13.493895 systemd[1740]: Reached target timers.target - Timers.
Apr 13 19:27:13.500133 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 19:27:13.510826 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 19:27:13.510901 systemd[1740]: Reached target sockets.target - Sockets.
Apr 13 19:27:13.510916 systemd[1740]: Reached target basic.target - Basic System.
Apr 13 19:27:13.511024 systemd[1740]: Reached target default.target - Main User Target.
Apr 13 19:27:13.511061 systemd[1740]: Startup finished in 127ms.
Apr 13 19:27:13.511223 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 19:27:13.517278 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 19:27:13.644480 systemd[1]: Started sshd@1-49.13.63.18:22-50.85.169.122:52186.service - OpenSSH per-connection server daemon (50.85.169.122:52186).
Apr 13 19:27:13.770309 sshd[1752]: Accepted publickey for core from 50.85.169.122 port 52186 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:13.771639 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:13.778692 systemd-logind[1586]: New session 2 of user core.
Apr 13 19:27:13.785379 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 19:27:13.892604 sshd[1752]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:13.897647 systemd[1]: sshd@1-49.13.63.18:22-50.85.169.122:52186.service: Deactivated successfully.
Apr 13 19:27:13.900908 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 19:27:13.902346 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit.
Apr 13 19:27:13.903669 systemd-logind[1586]: Removed session 2.
Apr 13 19:27:13.918484 systemd[1]: Started sshd@2-49.13.63.18:22-50.85.169.122:52196.service - OpenSSH per-connection server daemon (50.85.169.122:52196).
Apr 13 19:27:14.040055 sshd[1760]: Accepted publickey for core from 50.85.169.122 port 52196 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:14.041772 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:14.049051 systemd-logind[1586]: New session 3 of user core.
Apr 13 19:27:14.054504 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 19:27:14.157609 sshd[1760]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:14.165579 systemd[1]: sshd@2-49.13.63.18:22-50.85.169.122:52196.service: Deactivated successfully.
Apr 13 19:27:14.168811 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 19:27:14.170492 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit.
Apr 13 19:27:14.171361 systemd-logind[1586]: Removed session 3.
Apr 13 19:27:14.179520 systemd[1]: Started sshd@3-49.13.63.18:22-50.85.169.122:52202.service - OpenSSH per-connection server daemon (50.85.169.122:52202).
Apr 13 19:27:14.315169 sshd[1768]: Accepted publickey for core from 50.85.169.122 port 52202 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:14.316600 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:14.324049 systemd-logind[1586]: New session 4 of user core.
Apr 13 19:27:14.330489 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:27:14.436293 sshd[1768]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:14.441733 systemd[1]: sshd@3-49.13.63.18:22-50.85.169.122:52202.service: Deactivated successfully.
Apr 13 19:27:14.445616 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:27:14.445620 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:27:14.447298 systemd-logind[1586]: Removed session 4.
Apr 13 19:27:14.457253 systemd[1]: Started sshd@4-49.13.63.18:22-50.85.169.122:52212.service - OpenSSH per-connection server daemon (50.85.169.122:52212).
Apr 13 19:27:14.596279 sshd[1776]: Accepted publickey for core from 50.85.169.122 port 52212 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:14.597926 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:14.604629 systemd-logind[1586]: New session 5 of user core.
Apr 13 19:27:14.612505 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:27:14.712512 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:27:14.712845 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:27:14.729901 sudo[1780]: pam_unix(sudo:session): session closed for user root
Apr 13 19:27:14.748362 sshd[1776]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:14.755417 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:27:14.755530 systemd[1]: sshd@4-49.13.63.18:22-50.85.169.122:52212.service: Deactivated successfully.
Apr 13 19:27:14.759512 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:27:14.763835 systemd-logind[1586]: Removed session 5.
Apr 13 19:27:14.772693 systemd[1]: Started sshd@5-49.13.63.18:22-50.85.169.122:52226.service - OpenSSH per-connection server daemon (50.85.169.122:52226).
Apr 13 19:27:14.897822 sshd[1785]: Accepted publickey for core from 50.85.169.122 port 52226 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:14.899393 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:14.904701 systemd-logind[1586]: New session 6 of user core.
Apr 13 19:27:14.909527 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:27:14.998674 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:27:14.998983 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:27:15.004083 sudo[1790]: pam_unix(sudo:session): session closed for user root
Apr 13 19:27:15.011188 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:27:15.011493 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:27:15.029655 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:27:15.033240 auditctl[1793]: No rules
Apr 13 19:27:15.033785 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:27:15.034078 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:27:15.039100 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:27:15.084999 augenrules[1812]: No rules
Apr 13 19:27:15.087457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:27:15.090431 sudo[1789]: pam_unix(sudo:session): session closed for user root
Apr 13 19:27:15.108349 sshd[1785]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:15.115219 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:27:15.116211 systemd[1]: sshd@5-49.13.63.18:22-50.85.169.122:52226.service: Deactivated successfully.
Apr 13 19:27:15.119605 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:27:15.123108 systemd-logind[1586]: Removed session 6.
Apr 13 19:27:15.130418 systemd[1]: Started sshd@6-49.13.63.18:22-50.85.169.122:52230.service - OpenSSH per-connection server daemon (50.85.169.122:52230).
Apr 13 19:27:15.243177 sshd[1821]: Accepted publickey for core from 50.85.169.122 port 52230 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:27:15.245068 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:15.250223 systemd-logind[1586]: New session 7 of user core.
Apr 13 19:27:15.256512 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:27:15.345323 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:27:15.345618 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:27:15.655473 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:27:15.656479 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:27:15.897984 dockerd[1840]: time="2026-04-13T19:27:15.897635276Z" level=info msg="Starting up"
Apr 13 19:27:15.979860 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1804583710-merged.mount: Deactivated successfully.
Apr 13 19:27:16.092526 dockerd[1840]: time="2026-04-13T19:27:16.092472980Z" level=info msg="Loading containers: start."
Apr 13 19:27:16.197991 kernel: Initializing XFRM netlink socket
Apr 13 19:27:16.286110 systemd-networkd[1251]: docker0: Link UP
Apr 13 19:27:16.315845 dockerd[1840]: time="2026-04-13T19:27:16.315781993Z" level=info msg="Loading containers: done."
Apr 13 19:27:16.331459 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3340619662-merged.mount: Deactivated successfully.
Apr 13 19:27:16.332925 dockerd[1840]: time="2026-04-13T19:27:16.332067426Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:27:16.332925 dockerd[1840]: time="2026-04-13T19:27:16.332184067Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:27:16.332925 dockerd[1840]: time="2026-04-13T19:27:16.332335237Z" level=info msg="Daemon has completed initialization"
Apr 13 19:27:16.379315 dockerd[1840]: time="2026-04-13T19:27:16.379189465Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:27:16.379453 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:27:16.892698 containerd[1620]: time="2026-04-13T19:27:16.892401401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 19:27:17.425479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894693578.mount: Deactivated successfully.
Apr 13 19:27:18.544118 containerd[1620]: time="2026-04-13T19:27:18.544010636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:18.547491 containerd[1620]: time="2026-04-13T19:27:18.547435377Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283781"
Apr 13 19:27:18.549230 containerd[1620]: time="2026-04-13T19:27:18.549168900Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:18.553160 containerd[1620]: time="2026-04-13T19:27:18.553079975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:18.555570 containerd[1620]: time="2026-04-13T19:27:18.555499704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 1.663055751s"
Apr 13 19:27:18.555570 containerd[1620]: time="2026-04-13T19:27:18.555549794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\""
Apr 13 19:27:18.556801 containerd[1620]: time="2026-04-13T19:27:18.556494835Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 19:27:19.717009 containerd[1620]: time="2026-04-13T19:27:19.716888208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:19.718338 containerd[1620]: time="2026-04-13T19:27:19.718302932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551922"
Apr 13 19:27:19.719573 containerd[1620]: time="2026-04-13T19:27:19.719103823Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:19.722661 containerd[1620]: time="2026-04-13T19:27:19.722620469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:19.724055 containerd[1620]: time="2026-04-13T19:27:19.724017909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.167444275s"
Apr 13 19:27:19.724160 containerd[1620]: time="2026-04-13T19:27:19.724142924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\""
Apr 13 19:27:19.724673 containerd[1620]: time="2026-04-13T19:27:19.724582429Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 19:27:20.484025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:27:20.495660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:20.652605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:20.654819 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:27:20.709240 kubelet[2056]: E0413 19:27:20.708852 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:27:20.712622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:27:20.712817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:27:20.918124 containerd[1620]: time="2026-04-13T19:27:20.916267774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:20.918787 containerd[1620]: time="2026-04-13T19:27:20.918675343Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301253"
Apr 13 19:27:20.919759 containerd[1620]: time="2026-04-13T19:27:20.919679355Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:20.923737 containerd[1620]: time="2026-04-13T19:27:20.923685785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:20.925483 containerd[1620]: time="2026-04-13T19:27:20.925440140Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.200663007s"
Apr 13 19:27:20.925616 containerd[1620]: time="2026-04-13T19:27:20.925599257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\""
Apr 13 19:27:20.926746 containerd[1620]: time="2026-04-13T19:27:20.926662642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 19:27:21.835288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684408328.mount: Deactivated successfully.
Apr 13 19:27:22.157530 containerd[1620]: time="2026-04-13T19:27:22.157354614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:22.158994 containerd[1620]: time="2026-04-13T19:27:22.158742082Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148979"
Apr 13 19:27:22.159988 containerd[1620]: time="2026-04-13T19:27:22.159871348Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:22.162305 containerd[1620]: time="2026-04-13T19:27:22.162245035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:22.163276 containerd[1620]: time="2026-04-13T19:27:22.163122371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 1.236238236s"
Apr 13 19:27:22.163276 containerd[1620]: time="2026-04-13T19:27:22.163156133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\""
Apr 13 19:27:22.164056 containerd[1620]: time="2026-04-13T19:27:22.163829690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 19:27:22.685731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131641617.mount: Deactivated successfully.
Apr 13 19:27:23.704010 containerd[1620]: time="2026-04-13T19:27:23.703925330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:23.706589 containerd[1620]: time="2026-04-13T19:27:23.706511169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Apr 13 19:27:23.709220 containerd[1620]: time="2026-04-13T19:27:23.709159914Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:23.713549 containerd[1620]: time="2026-04-13T19:27:23.713368179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:23.715738 containerd[1620]: time="2026-04-13T19:27:23.715651345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.55179248s"
Apr 13 19:27:23.715738 containerd[1620]: time="2026-04-13T19:27:23.715693764Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 13 19:27:23.716181 containerd[1620]: time="2026-04-13T19:27:23.716147995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 19:27:24.171922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862234395.mount: Deactivated successfully.
Apr 13 19:27:24.177268 containerd[1620]: time="2026-04-13T19:27:24.176673372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:24.177731 containerd[1620]: time="2026-04-13T19:27:24.177664984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 13 19:27:24.178969 containerd[1620]: time="2026-04-13T19:27:24.178747491Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:24.181509 containerd[1620]: time="2026-04-13T19:27:24.181053907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:24.182111 containerd[1620]: time="2026-04-13T19:27:24.182078280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.89442ms"
Apr 13 19:27:24.182111 containerd[1620]: time="2026-04-13T19:27:24.182110518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 13 19:27:24.182615 containerd[1620]: time="2026-04-13T19:27:24.182571015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 19:27:24.696030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975657594.mount: Deactivated successfully.
Apr 13 19:27:25.544047 containerd[1620]: time="2026-04-13T19:27:25.543383247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:25.546002 containerd[1620]: time="2026-04-13T19:27:25.545897795Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Apr 13 19:27:25.547893 containerd[1620]: time="2026-04-13T19:27:25.547834279Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:25.553971 containerd[1620]: time="2026-04-13T19:27:25.552604580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:25.557301 containerd[1620]: time="2026-04-13T19:27:25.557261275Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.374651081s"
Apr 13 19:27:25.557445 containerd[1620]: time="2026-04-13T19:27:25.557427652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 13 19:27:30.429783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:30.437376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:30.478395 systemd[1]: Reloading requested from client PID 2222 ('systemctl') (unit session-7.scope)...
Apr 13 19:27:30.478541 systemd[1]: Reloading...
Apr 13 19:27:30.576965 zram_generator::config[2259]: No configuration found.
Apr 13 19:27:30.720211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:27:30.788400 systemd[1]: Reloading finished in 309 ms.
Apr 13 19:27:30.838532 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 19:27:30.838877 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 19:27:30.839280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:30.847287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:30.968209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:30.976305 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:27:31.014728 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:27:31.015111 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 19:27:31.015154 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:27:31.015293 kubelet[2320]: I0413 19:27:31.015263 2320 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 19:27:31.713064 kubelet[2320]: I0413 19:27:31.713021 2320 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 19:27:31.713262 kubelet[2320]: I0413 19:27:31.713247 2320 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:27:31.713739 kubelet[2320]: I0413 19:27:31.713712 2320 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 19:27:31.743248 kubelet[2320]: E0413 19:27:31.743203 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.63.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 19:27:31.743476 kubelet[2320]: I0413 19:27:31.743441 2320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:27:31.752704 kubelet[2320]: E0413 19:27:31.752644 2320 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:27:31.752704 kubelet[2320]: I0413 19:27:31.752701 2320 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:27:31.756400 kubelet[2320]: I0413 19:27:31.756360 2320 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 19:27:31.759052 kubelet[2320]: I0413 19:27:31.758994 2320 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:27:31.759402 kubelet[2320]: I0413 19:27:31.759048 2320 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-8-01d4258341","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 13 19:27:31.759402 kubelet[2320]: I0413 19:27:31.759395 2320 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 19:27:31.759512 kubelet[2320]: I0413 19:27:31.759407 2320 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 19:27:31.759642 kubelet[2320]: I0413 19:27:31.759610 2320 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:27:31.763377 kubelet[2320]: I0413 19:27:31.763199 2320 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 19:27:31.763377 kubelet[2320]: I0413 19:27:31.763239 2320 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:27:31.763377 kubelet[2320]: I0413 19:27:31.763268 2320 kubelet.go:386] "Adding apiserver pod source"
Apr 13 19:27:31.763377 kubelet[2320]: I0413 19:27:31.763281 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:27:31.771343 kubelet[2320]: E0413 19:27:31.770745 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.63.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-8-01d4258341&limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 19:27:31.773344 kubelet[2320]: E0413 19:27:31.773290 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.63.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 19:27:31.773693 kubelet[2320]: I0413 19:27:31.773665 2320 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:27:31.775980 kubelet[2320]: I0413 19:27:31.775789 2320 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:27:31.776069 kubelet[2320]: W0413 19:27:31.776015 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 19:27:31.779242 kubelet[2320]: I0413 19:27:31.779219 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 19:27:31.779335 kubelet[2320]: I0413 19:27:31.779267 2320 server.go:1289] "Started kubelet"
Apr 13 19:27:31.780931 kubelet[2320]: I0413 19:27:31.779411 2320 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:27:31.780931 kubelet[2320]: I0413 19:27:31.780334 2320 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 19:27:31.781628 kubelet[2320]: I0413 19:27:31.781573 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:27:31.781962 kubelet[2320]: I0413 19:27:31.781918 2320 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:27:31.783819 kubelet[2320]: E0413 19:27:31.782549 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.63.18:6443/api/v1/namespaces/default/events\": dial tcp 49.13.63.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-8-01d4258341.18a6013a06ded410 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-8-01d4258341,UID:ci-4081-3-7-8-01d4258341,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-8-01d4258341,},FirstTimestamp:2026-04-13 19:27:31.77923688 +0000 UTC m=+0.798864269,LastTimestamp:2026-04-13 19:27:31.77923688 +0000 UTC m=+0.798864269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-8-01d4258341,}"
Apr 13 19:27:31.786140 kubelet[2320]: I0413 19:27:31.786117 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 19:27:31.786413 kubelet[2320]: E0413 19:27:31.786390 2320 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 19:27:31.786674 kubelet[2320]: I0413 19:27:31.786650 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:27:31.789259 kubelet[2320]: E0413 19:27:31.789024 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-8-01d4258341\" not found"
Apr 13 19:27:31.789259 kubelet[2320]: I0413 19:27:31.789109 2320 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 19:27:31.790282 kubelet[2320]: I0413 19:27:31.789642 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 19:27:31.790282 kubelet[2320]: I0413 19:27:31.789708 2320 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 19:27:31.790446 kubelet[2320]: I0413 19:27:31.790416 2320 factory.go:223] Registration of the systemd container factory successfully
Apr 13 19:27:31.790535 kubelet[2320]: I0413 19:27:31.790511 2320 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 19:27:31.790773 kubelet[2320]: E0413 19:27:31.790710 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.63.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 19:27:31.792250 kubelet[2320]: I0413 19:27:31.792092 2320 factory.go:223] Registration of the containerd container factory successfully
Apr 13 19:27:31.794080 kubelet[2320]: I0413 19:27:31.794048 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 19:27:31.811848 kubelet[2320]: E0413 19:27:31.811801 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-8-01d4258341?timeout=10s\": dial tcp 49.13.63.18:6443: connect: connection refused" interval="200ms"
Apr 13 19:27:31.818321 kubelet[2320]: I0413 19:27:31.817745 2320 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 19:27:31.818321 kubelet[2320]: I0413 19:27:31.817763 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 19:27:31.818321 kubelet[2320]: I0413 19:27:31.817791 2320 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:27:31.822013 kubelet[2320]: I0413 19:27:31.821927 2320 policy_none.go:49] "None policy: Start"
Apr 13 19:27:31.822013 kubelet[2320]: I0413 19:27:31.821983 2320 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 19:27:31.822013 kubelet[2320]: I0413 19:27:31.822005 2320 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 19:27:31.833345 kubelet[2320]: I0413 19:27:31.833311 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 19:27:31.834255 kubelet[2320]: I0413 19:27:31.834145 2320 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 19:27:31.834255 kubelet[2320]: I0413 19:27:31.834179 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 19:27:31.834255 kubelet[2320]: I0413 19:27:31.834187 2320 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 19:27:31.834255 kubelet[2320]: E0413 19:27:31.834224 2320 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 19:27:31.835688 kubelet[2320]: E0413 19:27:31.833536 2320 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 19:27:31.835688 kubelet[2320]: I0413 19:27:31.835191 2320 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 19:27:31.835688 kubelet[2320]: I0413 19:27:31.835203 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 19:27:31.835688 kubelet[2320]: I0413 19:27:31.835479 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 19:27:31.836158 kubelet[2320]: E0413 19:27:31.836131 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.63.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 19:27:31.837175 kubelet[2320]: E0413 19:27:31.837155 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 19:27:31.837339 kubelet[2320]: E0413 19:27:31.837319 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-8-01d4258341\" not found"
Apr 13 19:27:31.937907 kubelet[2320]: I0413 19:27:31.937851 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:31.939115 kubelet[2320]: E0413 19:27:31.939051 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.63.18:6443/api/v1/nodes\": dial tcp 49.13.63.18:6443: connect: connection refused" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:31.944149 kubelet[2320]: E0413 19:27:31.944109 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:31.948297 kubelet[2320]: E0413 19:27:31.948241 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:31.952501 kubelet[2320]: E0413 19:27:31.952473 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.013465 kubelet[2320]: E0413 19:27:32.013285 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-8-01d4258341?timeout=10s\": dial tcp 49.13.63.18:6443: connect: connection refused" interval="400ms"
Apr 13 19:27:32.090724 kubelet[2320]: I0413 19:27:32.090614 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.090724 kubelet[2320]: I0413 19:27:32.090699 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.090724 kubelet[2320]: I0413 19:27:32.090738 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.091721 kubelet[2320]: I0413 19:27:32.090778 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.091721 kubelet[2320]: I0413 19:27:32.090818 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.091721 kubelet[2320]: I0413 19:27:32.090857 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.091721 kubelet[2320]: I0413 19:27:32.090926 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a30103ac8e9374a33991d2349a96f03-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-8-01d4258341\" (UID: \"4a30103ac8e9374a33991d2349a96f03\") " pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.091721 kubelet[2320]: I0413 19:27:32.090994 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.092235 kubelet[2320]: I0413 19:27:32.091038 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.142252 kubelet[2320]: I0413 19:27:32.142130 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-8-01d4258341"
Apr 13 19:27:32.142923 kubelet[2320]: E0413 19:27:32.142849 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post
\"https://49.13.63.18:6443/api/v1/nodes\": dial tcp 49.13.63.18:6443: connect: connection refused" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:32.246132 containerd[1620]: time="2026-04-13T19:27:32.246022274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-8-01d4258341,Uid:89edd7625761ef4a2e50aa965ce7fb7f,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:32.250812 containerd[1620]: time="2026-04-13T19:27:32.250249597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-8-01d4258341,Uid:72cea3ba037af872486e65ccbd133950,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:32.254627 containerd[1620]: time="2026-04-13T19:27:32.254574204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-8-01d4258341,Uid:4a30103ac8e9374a33991d2349a96f03,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:32.414856 kubelet[2320]: E0413 19:27:32.414816 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-8-01d4258341?timeout=10s\": dial tcp 49.13.63.18:6443: connect: connection refused" interval="800ms" Apr 13 19:27:32.547904 kubelet[2320]: I0413 19:27:32.547395 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:32.547904 kubelet[2320]: E0413 19:27:32.547791 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.63.18:6443/api/v1/nodes\": dial tcp 49.13.63.18:6443: connect: connection refused" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:32.704437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44018059.mount: Deactivated successfully. 
Apr 13 19:27:32.708691 kubelet[2320]: E0413 19:27:32.708654 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.63.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-8-01d4258341&limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:27:32.712816 containerd[1620]: time="2026-04-13T19:27:32.712763107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:32.713867 containerd[1620]: time="2026-04-13T19:27:32.713831662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:32.715143 containerd[1620]: time="2026-04-13T19:27:32.715047124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:27:32.715307 containerd[1620]: time="2026-04-13T19:27:32.715285587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:27:32.716028 containerd[1620]: time="2026-04-13T19:27:32.715997090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:32.716967 containerd[1620]: time="2026-04-13T19:27:32.716916456Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:32.717755 containerd[1620]: time="2026-04-13T19:27:32.717699970Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:27:32.720610 containerd[1620]: time="2026-04-13T19:27:32.720550907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:32.722855 containerd[1620]: time="2026-04-13T19:27:32.722599266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.259794ms" Apr 13 19:27:32.724175 containerd[1620]: time="2026-04-13T19:27:32.723913173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.786887ms" Apr 13 19:27:32.727991 containerd[1620]: time="2026-04-13T19:27:32.727890659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.207116ms" Apr 13 19:27:32.854988 containerd[1620]: time="2026-04-13T19:27:32.854429118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:32.854988 containerd[1620]: time="2026-04-13T19:27:32.854536654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:32.854988 containerd[1620]: time="2026-04-13T19:27:32.854571418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.854988 containerd[1620]: time="2026-04-13T19:27:32.854722810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.860321 containerd[1620]: time="2026-04-13T19:27:32.860153741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:32.860582 containerd[1620]: time="2026-04-13T19:27:32.860292397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:32.860678 containerd[1620]: time="2026-04-13T19:27:32.859555862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:32.860872 containerd[1620]: time="2026-04-13T19:27:32.860764596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.861037 containerd[1620]: time="2026-04-13T19:27:32.861004460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.862109 containerd[1620]: time="2026-04-13T19:27:32.862060079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:32.862314 containerd[1620]: time="2026-04-13T19:27:32.862230535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.862539 containerd[1620]: time="2026-04-13T19:27:32.862505004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.936770 containerd[1620]: time="2026-04-13T19:27:32.936479495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-8-01d4258341,Uid:4a30103ac8e9374a33991d2349a96f03,Namespace:kube-system,Attempt:0,} returns sandbox id \"57fda3b9c98006334080e1bc22ddd4f9066d08bbb1fa434d7c4c2e9fde21ca54\"" Apr 13 19:27:32.944004 containerd[1620]: time="2026-04-13T19:27:32.943961788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-8-01d4258341,Uid:89edd7625761ef4a2e50aa965ce7fb7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce29a0682ea4a2417a731a745bd8a124f5903ff5c9a7b5cebcd1ef4b51585615\"" Apr 13 19:27:32.945433 containerd[1620]: time="2026-04-13T19:27:32.945303210Z" level=info msg="CreateContainer within sandbox \"57fda3b9c98006334080e1bc22ddd4f9066d08bbb1fa434d7c4c2e9fde21ca54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:27:32.946988 containerd[1620]: time="2026-04-13T19:27:32.946856700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-8-01d4258341,Uid:72cea3ba037af872486e65ccbd133950,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa87fc31d88b1a43b8aa84cb121a5a70d20257497ef54d6c836437ef8441203a\"" Apr 13 19:27:32.950065 containerd[1620]: time="2026-04-13T19:27:32.950020474Z" level=info msg="CreateContainer within sandbox \"ce29a0682ea4a2417a731a745bd8a124f5903ff5c9a7b5cebcd1ef4b51585615\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:27:32.952193 containerd[1620]: time="2026-04-13T19:27:32.952145610Z" level=info msg="CreateContainer within sandbox \"aa87fc31d88b1a43b8aa84cb121a5a70d20257497ef54d6c836437ef8441203a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:27:32.966464 containerd[1620]: time="2026-04-13T19:27:32.966351834Z" level=info msg="CreateContainer within sandbox \"57fda3b9c98006334080e1bc22ddd4f9066d08bbb1fa434d7c4c2e9fde21ca54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd\"" Apr 13 19:27:32.968574 containerd[1620]: time="2026-04-13T19:27:32.968541972Z" level=info msg="StartContainer for \"02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd\"" Apr 13 19:27:32.976681 containerd[1620]: time="2026-04-13T19:27:32.976634119Z" level=info msg="CreateContainer within sandbox \"aa87fc31d88b1a43b8aa84cb121a5a70d20257497ef54d6c836437ef8441203a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea\"" Apr 13 19:27:32.977731 containerd[1620]: time="2026-04-13T19:27:32.977527212Z" level=info msg="StartContainer for \"d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea\"" Apr 13 19:27:32.982149 containerd[1620]: time="2026-04-13T19:27:32.982104179Z" level=info msg="CreateContainer within sandbox \"ce29a0682ea4a2417a731a745bd8a124f5903ff5c9a7b5cebcd1ef4b51585615\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a11a8a804e0056daaab77060f7861d79d132fe90100f4ecbe62ce8c7b0371cb\"" Apr 13 19:27:32.983315 containerd[1620]: time="2026-04-13T19:27:32.983264170Z" level=info msg="StartContainer for \"0a11a8a804e0056daaab77060f7861d79d132fe90100f4ecbe62ce8c7b0371cb\"" Apr 13 19:27:33.065692 containerd[1620]: time="2026-04-13T19:27:33.065654006Z" level=info 
msg="StartContainer for \"02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd\" returns successfully" Apr 13 19:27:33.071576 kubelet[2320]: E0413 19:27:33.071174 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.63.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.63.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:27:33.071928 containerd[1620]: time="2026-04-13T19:27:33.071811203Z" level=info msg="StartContainer for \"0a11a8a804e0056daaab77060f7861d79d132fe90100f4ecbe62ce8c7b0371cb\" returns successfully" Apr 13 19:27:33.132085 containerd[1620]: time="2026-04-13T19:27:33.131617004Z" level=info msg="StartContainer for \"d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea\" returns successfully" Apr 13 19:27:33.351033 kubelet[2320]: I0413 19:27:33.349618 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:33.855324 kubelet[2320]: E0413 19:27:33.855052 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:33.859489 kubelet[2320]: E0413 19:27:33.859449 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:33.864499 kubelet[2320]: E0413 19:27:33.863876 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:34.867977 kubelet[2320]: E0413 19:27:34.866329 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:34.867977 kubelet[2320]: E0413 19:27:34.866572 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.011829 kubelet[2320]: E0413 19:27:35.011788 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-8-01d4258341\" not found" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.138723 kubelet[2320]: I0413 19:27:35.138592 2320 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.199077 kubelet[2320]: I0413 19:27:35.199003 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.234684 kubelet[2320]: E0413 19:27:35.234635 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-8-01d4258341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.234684 kubelet[2320]: I0413 19:27:35.234683 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.244771 kubelet[2320]: E0413 19:27:35.244728 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-8-01d4258341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.244771 kubelet[2320]: I0413 19:27:35.244771 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.254943 kubelet[2320]: E0413 19:27:35.253124 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081-3-7-8-01d4258341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:35.773688 kubelet[2320]: I0413 19:27:35.773359 2320 apiserver.go:52] "Watching apiserver" Apr 13 19:27:35.790589 kubelet[2320]: I0413 19:27:35.790560 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:27:35.928826 kubelet[2320]: I0413 19:27:35.928766 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:37.437107 systemd[1]: Reloading requested from client PID 2603 ('systemctl') (unit session-7.scope)... Apr 13 19:27:37.437129 systemd[1]: Reloading... Apr 13 19:27:37.532969 zram_generator::config[2643]: No configuration found. Apr 13 19:27:37.664311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:27:37.739469 systemd[1]: Reloading finished in 301 ms. Apr 13 19:27:37.779609 kubelet[2320]: I0413 19:27:37.778975 2320 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:27:37.779220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:27:37.794289 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:27:37.795041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:27:37.804646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:27:37.933963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:27:37.947609 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:27:38.012065 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:27:38.012065 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:27:38.012065 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:27:38.012065 kubelet[2698]: I0413 19:27:38.011745 2698 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:27:38.019293 kubelet[2698]: I0413 19:27:38.019237 2698 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:27:38.019293 kubelet[2698]: I0413 19:27:38.019272 2698 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:27:38.021354 kubelet[2698]: I0413 19:27:38.021305 2698 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:27:38.024472 kubelet[2698]: I0413 19:27:38.024436 2698 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:27:38.027291 kubelet[2698]: I0413 19:27:38.027038 2698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:27:38.038572 kubelet[2698]: E0413 19:27:38.038253 2698 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:27:38.038572 kubelet[2698]: I0413 19:27:38.038290 2698 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:27:38.041188 kubelet[2698]: I0413 19:27:38.041145 2698 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 19:27:38.042422 kubelet[2698]: I0413 19:27:38.042370 2698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:27:38.042806 kubelet[2698]: I0413 19:27:38.042431 2698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-8-01d4258341","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},
"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 19:27:38.042904 kubelet[2698]: I0413 19:27:38.042827 2698 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:27:38.042904 kubelet[2698]: I0413 19:27:38.042847 2698 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:27:38.042983 kubelet[2698]: I0413 19:27:38.042972 2698 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:27:38.043391 kubelet[2698]: I0413 19:27:38.043372 2698 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:27:38.044094 kubelet[2698]: I0413 19:27:38.043402 2698 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:27:38.044094 kubelet[2698]: I0413 19:27:38.043444 2698 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:27:38.044094 kubelet[2698]: I0413 19:27:38.043468 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:27:38.045307 kubelet[2698]: I0413 19:27:38.045286 2698 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:27:38.046512 kubelet[2698]: I0413 19:27:38.046487 2698 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:27:38.049330 kubelet[2698]: I0413 19:27:38.049265 2698 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:27:38.049463 kubelet[2698]: I0413 19:27:38.049450 2698 server.go:1289] "Started kubelet" Apr 13 19:27:38.051562 kubelet[2698]: I0413 19:27:38.051536 2698 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Apr 13 19:27:38.062870 kubelet[2698]: I0413 19:27:38.062784 2698 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:27:38.064190 kubelet[2698]: I0413 19:27:38.064171 2698 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:27:38.071745 kubelet[2698]: I0413 19:27:38.070535 2698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:27:38.075516 kubelet[2698]: I0413 19:27:38.072907 2698 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:27:38.075516 kubelet[2698]: I0413 19:27:38.072440 2698 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:27:38.075516 kubelet[2698]: I0413 19:27:38.073081 2698 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:27:38.083430 kubelet[2698]: I0413 19:27:38.083400 2698 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:27:38.083752 kubelet[2698]: I0413 19:27:38.083735 2698 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:27:38.087260 kubelet[2698]: I0413 19:27:38.087232 2698 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:27:38.100847 kubelet[2698]: I0413 19:27:38.100784 2698 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:27:38.101137 kubelet[2698]: I0413 19:27:38.100904 2698 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:27:38.104004 kubelet[2698]: E0413 19:27:38.103475 2698 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:27:38.104004 kubelet[2698]: I0413 19:27:38.103543 2698 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:27:38.104004 kubelet[2698]: I0413 19:27:38.103560 2698 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:27:38.104004 kubelet[2698]: I0413 19:27:38.103578 2698 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:27:38.104004 kubelet[2698]: I0413 19:27:38.103584 2698 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:27:38.104004 kubelet[2698]: E0413 19:27:38.103624 2698 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:27:38.106995 kubelet[2698]: I0413 19:27:38.106956 2698 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:27:38.169112 kubelet[2698]: I0413 19:27:38.169082 2698 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:27:38.169112 kubelet[2698]: I0413 19:27:38.169105 2698 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:27:38.169407 kubelet[2698]: I0413 19:27:38.169143 2698 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.169636 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.169651 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.169677 2698 policy_none.go:49] "None policy: Start" Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.169687 2698 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.169712 2698 state_mem.go:35] "Initializing new in-memory 
state store" Apr 13 19:27:38.170322 kubelet[2698]: I0413 19:27:38.170147 2698 state_mem.go:75] "Updated machine memory state" Apr 13 19:27:38.171509 kubelet[2698]: E0413 19:27:38.171488 2698 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:27:38.172416 kubelet[2698]: I0413 19:27:38.171654 2698 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:27:38.172416 kubelet[2698]: I0413 19:27:38.171675 2698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:27:38.174028 kubelet[2698]: I0413 19:27:38.173192 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:27:38.175837 kubelet[2698]: E0413 19:27:38.175815 2698 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:27:38.205694 kubelet[2698]: I0413 19:27:38.205619 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.207842 kubelet[2698]: I0413 19:27:38.206316 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.207842 kubelet[2698]: I0413 19:27:38.207329 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.217337 kubelet[2698]: E0413 19:27:38.217289 2698 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-8-01d4258341\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.281518 kubelet[2698]: I0413 19:27:38.281094 2698 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.285726 kubelet[2698]: I0413 19:27:38.285314 2698 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.285726 kubelet[2698]: I0413 19:27:38.285350 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.285726 kubelet[2698]: I0413 19:27:38.285376 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89edd7625761ef4a2e50aa965ce7fb7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-8-01d4258341\" (UID: \"89edd7625761ef4a2e50aa965ce7fb7f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.285726 kubelet[2698]: I0413 19:27:38.285464 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.285726 kubelet[2698]: I0413 19:27:38.285487 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.286083 kubelet[2698]: I0413 19:27:38.285504 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a30103ac8e9374a33991d2349a96f03-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-8-01d4258341\" (UID: \"4a30103ac8e9374a33991d2349a96f03\") " pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.286083 kubelet[2698]: I0413 19:27:38.285525 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.286083 kubelet[2698]: I0413 19:27:38.285540 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.286083 kubelet[2698]: I0413 19:27:38.285558 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72cea3ba037af872486e65ccbd133950-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-8-01d4258341\" (UID: \"72cea3ba037af872486e65ccbd133950\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.293618 kubelet[2698]: I0413 19:27:38.293177 2698 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:38.293618 kubelet[2698]: I0413 19:27:38.293266 2698 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-8-01d4258341" Apr 13 19:27:39.054838 kubelet[2698]: I0413 19:27:39.054766 2698 apiserver.go:52] "Watching apiserver" Apr 13 19:27:39.084563 kubelet[2698]: I0413 19:27:39.084521 2698 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:27:39.140725 kubelet[2698]: I0413 19:27:39.139105 2698 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:39.150590 kubelet[2698]: E0413 19:27:39.150293 2698 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-8-01d4258341\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" Apr 13 19:27:39.165676 kubelet[2698]: I0413 19:27:39.165615 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-8-01d4258341" podStartSLOduration=4.165600809 podStartE2EDuration="4.165600809s" podCreationTimestamp="2026-04-13 19:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:39.164388564 +0000 UTC m=+1.209715494" watchObservedRunningTime="2026-04-13 19:27:39.165600809 +0000 UTC m=+1.210927739" Apr 13 19:27:39.187244 kubelet[2698]: I0413 19:27:39.187028 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-8-01d4258341" podStartSLOduration=1.1870092429999999 podStartE2EDuration="1.187009243s" podCreationTimestamp="2026-04-13 19:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:39.177016981 +0000 UTC m=+1.222343911" watchObservedRunningTime="2026-04-13 19:27:39.187009243 +0000 UTC m=+1.232336173" Apr 13 19:27:39.199240 kubelet[2698]: I0413 19:27:39.199165 2698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-8-01d4258341" podStartSLOduration=1.199144494 podStartE2EDuration="1.199144494s" podCreationTimestamp="2026-04-13 19:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:39.187157557 +0000 UTC m=+1.232484487" watchObservedRunningTime="2026-04-13 19:27:39.199144494 +0000 UTC m=+1.244471464" Apr 13 19:27:42.062930 kubelet[2698]: I0413 19:27:42.062882 2698 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:27:42.064176 containerd[1620]: time="2026-04-13T19:27:42.064018425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:27:42.064882 kubelet[2698]: I0413 19:27:42.064286 2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:27:43.017577 kubelet[2698]: I0413 19:27:43.017536 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d03b796e-0c68-476a-9581-3e48592959be-kube-proxy\") pod \"kube-proxy-8hqpp\" (UID: \"d03b796e-0c68-476a-9581-3e48592959be\") " pod="kube-system/kube-proxy-8hqpp" Apr 13 19:27:43.017794 kubelet[2698]: I0413 19:27:43.017725 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d03b796e-0c68-476a-9581-3e48592959be-xtables-lock\") pod \"kube-proxy-8hqpp\" (UID: \"d03b796e-0c68-476a-9581-3e48592959be\") " pod="kube-system/kube-proxy-8hqpp" Apr 13 19:27:43.017794 kubelet[2698]: I0413 19:27:43.017748 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d03b796e-0c68-476a-9581-3e48592959be-lib-modules\") pod \"kube-proxy-8hqpp\" (UID: \"d03b796e-0c68-476a-9581-3e48592959be\") " pod="kube-system/kube-proxy-8hqpp" Apr 13 19:27:43.017794 kubelet[2698]: I0413 19:27:43.017764 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dht88\" (UniqueName: \"kubernetes.io/projected/d03b796e-0c68-476a-9581-3e48592959be-kube-api-access-dht88\") pod \"kube-proxy-8hqpp\" (UID: \"d03b796e-0c68-476a-9581-3e48592959be\") " pod="kube-system/kube-proxy-8hqpp" Apr 13 19:27:43.266958 containerd[1620]: time="2026-04-13T19:27:43.266872634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hqpp,Uid:d03b796e-0c68-476a-9581-3e48592959be,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:43.292639 containerd[1620]: time="2026-04-13T19:27:43.292469870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:43.292639 containerd[1620]: time="2026-04-13T19:27:43.292520371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:43.292639 containerd[1620]: time="2026-04-13T19:27:43.292531135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:43.293291 containerd[1620]: time="2026-04-13T19:27:43.293124414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:43.319992 kubelet[2698]: I0413 19:27:43.319566 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7h7\" (UniqueName: \"kubernetes.io/projected/17d5dd83-7427-44d0-ae13-3ce8142328d1-kube-api-access-mm7h7\") pod \"tigera-operator-6bf85f8dd-bcj2k\" (UID: \"17d5dd83-7427-44d0-ae13-3ce8142328d1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bcj2k" Apr 13 19:27:43.319992 kubelet[2698]: I0413 19:27:43.319608 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17d5dd83-7427-44d0-ae13-3ce8142328d1-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-bcj2k\" (UID: \"17d5dd83-7427-44d0-ae13-3ce8142328d1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bcj2k" Apr 13 19:27:43.340441 containerd[1620]: time="2026-04-13T19:27:43.339637361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hqpp,Uid:d03b796e-0c68-476a-9581-3e48592959be,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9fb6732dcfba0b71c98209a75ad1c7f0252e4fa77718f3691a8f2c3f5ac53b1\"" Apr 13 19:27:43.345789 containerd[1620]: time="2026-04-13T19:27:43.345747143Z" level=info msg="CreateContainer within sandbox \"e9fb6732dcfba0b71c98209a75ad1c7f0252e4fa77718f3691a8f2c3f5ac53b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:27:43.361499 containerd[1620]: time="2026-04-13T19:27:43.361440068Z" level=info msg="CreateContainer within sandbox \"e9fb6732dcfba0b71c98209a75ad1c7f0252e4fa77718f3691a8f2c3f5ac53b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a1ec9af58793eea170d913a3117b20775eaeae471212bf2c1520bc77a7146bf\"" Apr 13 19:27:43.362710 containerd[1620]: time="2026-04-13T19:27:43.362481648Z" level=info msg="StartContainer for 
\"7a1ec9af58793eea170d913a3117b20775eaeae471212bf2c1520bc77a7146bf\"" Apr 13 19:27:43.419402 containerd[1620]: time="2026-04-13T19:27:43.419047806Z" level=info msg="StartContainer for \"7a1ec9af58793eea170d913a3117b20775eaeae471212bf2c1520bc77a7146bf\" returns successfully" Apr 13 19:27:43.537915 containerd[1620]: time="2026-04-13T19:27:43.537271255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bcj2k,Uid:17d5dd83-7427-44d0-ae13-3ce8142328d1,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:27:43.571037 containerd[1620]: time="2026-04-13T19:27:43.570752549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:43.571643 containerd[1620]: time="2026-04-13T19:27:43.571548510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:43.571903 containerd[1620]: time="2026-04-13T19:27:43.571764877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:43.572451 containerd[1620]: time="2026-04-13T19:27:43.572316099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:43.639886 containerd[1620]: time="2026-04-13T19:27:43.639804539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bcj2k,Uid:17d5dd83-7427-44d0-ae13-3ce8142328d1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b67892528463124b0c131108094802685d27ebf2d808d2e0925916001ade6e72\"" Apr 13 19:27:43.642973 containerd[1620]: time="2026-04-13T19:27:43.641871412Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:27:44.182193 kubelet[2698]: I0413 19:27:44.181546 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8hqpp" podStartSLOduration=2.181524572 podStartE2EDuration="2.181524572s" podCreationTimestamp="2026-04-13 19:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:44.181425094 +0000 UTC m=+6.226752104" watchObservedRunningTime="2026-04-13 19:27:44.181524572 +0000 UTC m=+6.226851502" Apr 13 19:27:45.234981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458535085.mount: Deactivated successfully. 
Apr 13 19:27:46.240653 containerd[1620]: time="2026-04-13T19:27:46.239671162Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:46.240653 containerd[1620]: time="2026-04-13T19:27:46.240603400Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:27:46.241556 containerd[1620]: time="2026-04-13T19:27:46.241510109Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:46.244352 containerd[1620]: time="2026-04-13T19:27:46.244057018Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:46.244818 containerd[1620]: time="2026-04-13T19:27:46.244789427Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.602884042s" Apr 13 19:27:46.244856 containerd[1620]: time="2026-04-13T19:27:46.244817917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:27:46.249514 containerd[1620]: time="2026-04-13T19:27:46.249487669Z" level=info msg="CreateContainer within sandbox \"b67892528463124b0c131108094802685d27ebf2d808d2e0925916001ade6e72\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:27:46.266602 containerd[1620]: time="2026-04-13T19:27:46.266517595Z" level=info msg="CreateContainer within sandbox 
\"b67892528463124b0c131108094802685d27ebf2d808d2e0925916001ade6e72\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270\"" Apr 13 19:27:46.268312 containerd[1620]: time="2026-04-13T19:27:46.268254828Z" level=info msg="StartContainer for \"377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270\"" Apr 13 19:27:46.323191 containerd[1620]: time="2026-04-13T19:27:46.323054432Z" level=info msg="StartContainer for \"377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270\" returns successfully" Apr 13 19:27:47.733524 kubelet[2698]: I0413 19:27:47.733391 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-bcj2k" podStartSLOduration=2.128447052 podStartE2EDuration="4.733372811s" podCreationTimestamp="2026-04-13 19:27:43 +0000 UTC" firstStartedPulling="2026-04-13 19:27:43.640916627 +0000 UTC m=+5.686243557" lastFinishedPulling="2026-04-13 19:27:46.245842426 +0000 UTC m=+8.291169316" observedRunningTime="2026-04-13 19:27:47.195828662 +0000 UTC m=+9.241155592" watchObservedRunningTime="2026-04-13 19:27:47.733372811 +0000 UTC m=+9.778699701" Apr 13 19:27:52.351569 sudo[1825]: pam_unix(sudo:session): session closed for user root Apr 13 19:27:52.367236 sshd[1821]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:52.381122 systemd[1]: sshd@6-49.13.63.18:22-50.85.169.122:52230.service: Deactivated successfully. Apr 13 19:27:52.389354 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:27:52.391279 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:27:52.392412 systemd-logind[1586]: Removed session 7. Apr 13 19:27:53.867047 update_engine[1588]: I20260413 19:27:53.866968 1588 update_attempter.cc:509] Updating boot flags... 
Apr 13 19:27:53.992960 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (3095) Apr 13 19:27:54.114970 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (3098) Apr 13 19:27:57.017636 kubelet[2698]: I0413 19:27:57.017456 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85fc6a41-dc71-478f-80e6-f5ae2a8984da-typha-certs\") pod \"calico-typha-599bb458d5-5nls5\" (UID: \"85fc6a41-dc71-478f-80e6-f5ae2a8984da\") " pod="calico-system/calico-typha-599bb458d5-5nls5" Apr 13 19:27:57.018645 kubelet[2698]: I0413 19:27:57.018413 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rq9\" (UniqueName: \"kubernetes.io/projected/85fc6a41-dc71-478f-80e6-f5ae2a8984da-kube-api-access-c5rq9\") pod \"calico-typha-599bb458d5-5nls5\" (UID: \"85fc6a41-dc71-478f-80e6-f5ae2a8984da\") " pod="calico-system/calico-typha-599bb458d5-5nls5" Apr 13 19:27:57.018645 kubelet[2698]: I0413 19:27:57.018586 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fc6a41-dc71-478f-80e6-f5ae2a8984da-tigera-ca-bundle\") pod \"calico-typha-599bb458d5-5nls5\" (UID: \"85fc6a41-dc71-478f-80e6-f5ae2a8984da\") " pod="calico-system/calico-typha-599bb458d5-5nls5" Apr 13 19:27:57.119477 kubelet[2698]: I0413 19:27:57.119423 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-xtables-lock\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119625 kubelet[2698]: I0413 19:27:57.119503 2698 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-bpffs\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119625 kubelet[2698]: I0413 19:27:57.119541 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-cni-net-dir\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119625 kubelet[2698]: I0413 19:27:57.119561 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-policysync\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119625 kubelet[2698]: I0413 19:27:57.119580 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-cni-log-dir\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119625 kubelet[2698]: I0413 19:27:57.119596 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01b34782-5de7-4278-8bb5-e20af0415d00-tigera-ca-bundle\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119769 kubelet[2698]: I0413 19:27:57.119613 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-var-run-calico\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119769 kubelet[2698]: I0413 19:27:57.119630 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/01b34782-5de7-4278-8bb5-e20af0415d00-node-certs\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119769 kubelet[2698]: I0413 19:27:57.119649 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-flexvol-driver-host\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119769 kubelet[2698]: I0413 19:27:57.119667 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-lib-modules\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119769 kubelet[2698]: I0413 19:27:57.119685 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-nodeproc\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119897 kubelet[2698]: I0413 19:27:57.119702 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrk4\" (UniqueName: 
\"kubernetes.io/projected/01b34782-5de7-4278-8bb5-e20af0415d00-kube-api-access-psrk4\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119897 kubelet[2698]: I0413 19:27:57.119721 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-cni-bin-dir\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119897 kubelet[2698]: I0413 19:27:57.119738 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-sys-fs\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.119897 kubelet[2698]: I0413 19:27:57.119756 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01b34782-5de7-4278-8bb5-e20af0415d00-var-lib-calico\") pod \"calico-node-sdgtk\" (UID: \"01b34782-5de7-4278-8bb5-e20af0415d00\") " pod="calico-system/calico-node-sdgtk" Apr 13 19:27:57.196490 kubelet[2698]: E0413 19:27:57.196440 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae" Apr 13 19:27:57.220876 kubelet[2698]: I0413 19:27:57.220732 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb288a13-6f3a-4bf1-9058-ef9f893933ae-registration-dir\") 
pod \"csi-node-driver-xdt82\" (UID: \"cb288a13-6f3a-4bf1-9058-ef9f893933ae\") " pod="calico-system/csi-node-driver-xdt82" Apr 13 19:27:57.220876 kubelet[2698]: I0413 19:27:57.220826 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8mg4\" (UniqueName: \"kubernetes.io/projected/cb288a13-6f3a-4bf1-9058-ef9f893933ae-kube-api-access-h8mg4\") pod \"csi-node-driver-xdt82\" (UID: \"cb288a13-6f3a-4bf1-9058-ef9f893933ae\") " pod="calico-system/csi-node-driver-xdt82" Apr 13 19:27:57.222208 kubelet[2698]: I0413 19:27:57.221704 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb288a13-6f3a-4bf1-9058-ef9f893933ae-kubelet-dir\") pod \"csi-node-driver-xdt82\" (UID: \"cb288a13-6f3a-4bf1-9058-ef9f893933ae\") " pod="calico-system/csi-node-driver-xdt82" Apr 13 19:27:57.222208 kubelet[2698]: I0413 19:27:57.221775 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cb288a13-6f3a-4bf1-9058-ef9f893933ae-varrun\") pod \"csi-node-driver-xdt82\" (UID: \"cb288a13-6f3a-4bf1-9058-ef9f893933ae\") " pod="calico-system/csi-node-driver-xdt82" Apr 13 19:27:57.222208 kubelet[2698]: I0413 19:27:57.221803 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb288a13-6f3a-4bf1-9058-ef9f893933ae-socket-dir\") pod \"csi-node-driver-xdt82\" (UID: \"cb288a13-6f3a-4bf1-9058-ef9f893933ae\") " pod="calico-system/csi-node-driver-xdt82" Apr 13 19:27:57.226542 kubelet[2698]: E0413 19:27:57.226520 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.226799 kubelet[2698]: W0413 19:27:57.226596 2698 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.226799 kubelet[2698]: E0413 19:27:57.226621 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.234662 kubelet[2698]: E0413 19:27:57.234637 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.234849 kubelet[2698]: W0413 19:27:57.234787 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.234849 kubelet[2698]: E0413 19:27:57.234814 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.243737 containerd[1620]: time="2026-04-13T19:27:57.243634323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bb458d5-5nls5,Uid:85fc6a41-dc71-478f-80e6-f5ae2a8984da,Namespace:calico-system,Attempt:0,}" Apr 13 19:27:57.279162 kubelet[2698]: E0413 19:27:57.278540 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.279162 kubelet[2698]: W0413 19:27:57.278732 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.279162 kubelet[2698]: E0413 19:27:57.278754 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.298085 containerd[1620]: time="2026-04-13T19:27:57.297696087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:57.298085 containerd[1620]: time="2026-04-13T19:27:57.297759579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:57.298085 containerd[1620]: time="2026-04-13T19:27:57.297774902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:57.298085 containerd[1620]: time="2026-04-13T19:27:57.297868600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:57.325821 kubelet[2698]: E0413 19:27:57.325774 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.325821 kubelet[2698]: W0413 19:27:57.325797 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.325821 kubelet[2698]: E0413 19:27:57.325825 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326383 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.327137 kubelet[2698]: W0413 19:27:57.326422 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326436 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326708 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.327137 kubelet[2698]: W0413 19:27:57.326719 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326728 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326941 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.327137 kubelet[2698]: W0413 19:27:57.326950 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.327137 kubelet[2698]: E0413 19:27:57.326959 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327195 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.329803 kubelet[2698]: W0413 19:27:57.327205 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327216 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327496 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.329803 kubelet[2698]: W0413 19:27:57.327506 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327516 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327707 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.329803 kubelet[2698]: W0413 19:27:57.327717 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327725 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.329803 kubelet[2698]: E0413 19:27:57.327898 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330143 kubelet[2698]: W0413 19:27:57.327909 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.327916 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.328156 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330143 kubelet[2698]: W0413 19:27:57.328167 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.328204 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.328372 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330143 kubelet[2698]: W0413 19:27:57.328380 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.328388 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330143 kubelet[2698]: E0413 19:27:57.328542 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330143 kubelet[2698]: W0413 19:27:57.328549 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.328557 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.328713 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330436 kubelet[2698]: W0413 19:27:57.328721 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.328729 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.329001 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330436 kubelet[2698]: W0413 19:27:57.329013 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.329022 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.329242 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330436 kubelet[2698]: W0413 19:27:57.329251 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330436 kubelet[2698]: E0413 19:27:57.329260 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329461 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330666 kubelet[2698]: W0413 19:27:57.329469 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329477 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329672 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330666 kubelet[2698]: W0413 19:27:57.329684 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329700 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329895 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330666 kubelet[2698]: W0413 19:27:57.329905 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.329915 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330666 kubelet[2698]: E0413 19:27:57.330123 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330912 kubelet[2698]: W0413 19:27:57.330135 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330143 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330386 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330912 kubelet[2698]: W0413 19:27:57.330396 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330404 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330612 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330912 kubelet[2698]: W0413 19:27:57.330622 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330631 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.330912 kubelet[2698]: E0413 19:27:57.330800 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.330912 kubelet[2698]: W0413 19:27:57.330808 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.331178 kubelet[2698]: E0413 19:27:57.330815 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.331178 kubelet[2698]: E0413 19:27:57.330974 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.331178 kubelet[2698]: W0413 19:27:57.330985 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.331178 kubelet[2698]: E0413 19:27:57.330992 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.331273 kubelet[2698]: E0413 19:27:57.331211 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.331273 kubelet[2698]: W0413 19:27:57.331221 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.331273 kubelet[2698]: E0413 19:27:57.331230 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.331821 kubelet[2698]: E0413 19:27:57.331783 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.331821 kubelet[2698]: W0413 19:27:57.331798 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.333269 kubelet[2698]: E0413 19:27:57.333095 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.333419 kubelet[2698]: E0413 19:27:57.333406 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.333484 kubelet[2698]: W0413 19:27:57.333473 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.333663 kubelet[2698]: E0413 19:27:57.333648 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:57.345664 kubelet[2698]: E0413 19:27:57.345588 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:57.345664 kubelet[2698]: W0413 19:27:57.345608 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:57.345664 kubelet[2698]: E0413 19:27:57.345627 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:57.359156 containerd[1620]: time="2026-04-13T19:27:57.359082265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bb458d5-5nls5,Uid:85fc6a41-dc71-478f-80e6-f5ae2a8984da,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b8b67a1cb2882112209cf824a89ab84fd86e0d7752cb0f4ab6f6fde694237e7\"" Apr 13 19:27:57.361136 containerd[1620]: time="2026-04-13T19:27:57.361084252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:27:57.378647 containerd[1620]: time="2026-04-13T19:27:57.378578752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdgtk,Uid:01b34782-5de7-4278-8bb5-e20af0415d00,Namespace:calico-system,Attempt:0,}" Apr 13 19:27:57.405462 containerd[1620]: time="2026-04-13T19:27:57.405361405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:57.405649 containerd[1620]: time="2026-04-13T19:27:57.405471067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:57.405649 containerd[1620]: time="2026-04-13T19:27:57.405509674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:57.405828 containerd[1620]: time="2026-04-13T19:27:57.405642100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:57.442163 containerd[1620]: time="2026-04-13T19:27:57.442102903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdgtk,Uid:01b34782-5de7-4278-8bb5-e20af0415d00,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\"" Apr 13 19:27:58.978538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013973113.mount: Deactivated successfully. Apr 13 19:27:59.104580 kubelet[2698]: E0413 19:27:59.104503 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae" Apr 13 19:27:59.764570 containerd[1620]: time="2026-04-13T19:27:59.764481575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.766225 containerd[1620]: time="2026-04-13T19:27:59.766180153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:27:59.767311 containerd[1620]: time="2026-04-13T19:27:59.767260503Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.770806 containerd[1620]: time="2026-04-13T19:27:59.770752718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.771807 containerd[1620]: time="2026-04-13T19:27:59.771278650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.410161472s" Apr 13 19:27:59.771807 containerd[1620]: time="2026-04-13T19:27:59.771307655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:27:59.775060 containerd[1620]: time="2026-04-13T19:27:59.775016267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:27:59.792242 containerd[1620]: time="2026-04-13T19:27:59.792202330Z" level=info msg="CreateContainer within sandbox \"9b8b67a1cb2882112209cf824a89ab84fd86e0d7752cb0f4ab6f6fde694237e7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:27:59.810863 containerd[1620]: time="2026-04-13T19:27:59.810675139Z" level=info msg="CreateContainer within sandbox \"9b8b67a1cb2882112209cf824a89ab84fd86e0d7752cb0f4ab6f6fde694237e7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b42e30e4b108a8e71407790dafb1e91725f723009cdfd809dcf5bafa8c60cb07\"" Apr 13 19:27:59.812766 containerd[1620]: time="2026-04-13T19:27:59.812717458Z" level=info msg="StartContainer for \"b42e30e4b108a8e71407790dafb1e91725f723009cdfd809dcf5bafa8c60cb07\"" Apr 13 19:27:59.877440 containerd[1620]: time="2026-04-13T19:27:59.877030888Z" level=info msg="StartContainer for \"b42e30e4b108a8e71407790dafb1e91725f723009cdfd809dcf5bafa8c60cb07\" returns successfully" Apr 13 19:28:00.219974 kubelet[2698]: E0413 19:28:00.218432 2698 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.219974 kubelet[2698]: W0413 19:28:00.218452 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.219974 kubelet[2698]: E0413 19:28:00.218470 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.223042 kubelet[2698]: E0413 19:28:00.222516 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.223042 kubelet[2698]: W0413 19:28:00.222538 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.223042 kubelet[2698]: E0413 19:28:00.222603 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.224068 kubelet[2698]: E0413 19:28:00.224044 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.224068 kubelet[2698]: W0413 19:28:00.224064 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.224228 kubelet[2698]: E0413 19:28:00.224080 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.226125 kubelet[2698]: E0413 19:28:00.226104 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.226125 kubelet[2698]: W0413 19:28:00.226123 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.226268 kubelet[2698]: E0413 19:28:00.226137 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.227146 kubelet[2698]: E0413 19:28:00.227125 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.227146 kubelet[2698]: W0413 19:28:00.227141 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.227146 kubelet[2698]: E0413 19:28:00.227153 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.227332 kubelet[2698]: E0413 19:28:00.227319 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.227332 kubelet[2698]: W0413 19:28:00.227329 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.227383 kubelet[2698]: E0413 19:28:00.227340 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.229328 kubelet[2698]: E0413 19:28:00.229306 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.229328 kubelet[2698]: W0413 19:28:00.229322 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.229328 kubelet[2698]: E0413 19:28:00.229334 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.229520 kubelet[2698]: E0413 19:28:00.229508 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.229520 kubelet[2698]: W0413 19:28:00.229519 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.229617 kubelet[2698]: E0413 19:28:00.229527 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.229870 kubelet[2698]: E0413 19:28:00.229846 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.229870 kubelet[2698]: W0413 19:28:00.229860 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.229870 kubelet[2698]: E0413 19:28:00.229871 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.230050 kubelet[2698]: E0413 19:28:00.230035 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.230050 kubelet[2698]: W0413 19:28:00.230047 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.230108 kubelet[2698]: E0413 19:28:00.230057 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.232085 kubelet[2698]: E0413 19:28:00.232060 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.232085 kubelet[2698]: W0413 19:28:00.232078 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.232085 kubelet[2698]: E0413 19:28:00.232091 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232296 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.233977 kubelet[2698]: W0413 19:28:00.232310 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232320 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232457 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.233977 kubelet[2698]: W0413 19:28:00.232463 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232471 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232646 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.233977 kubelet[2698]: W0413 19:28:00.232654 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232662 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.233977 kubelet[2698]: E0413 19:28:00.232791 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.234262 kubelet[2698]: W0413 19:28:00.232798 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.234262 kubelet[2698]: E0413 19:28:00.232805 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.245237 kubelet[2698]: E0413 19:28:00.245088 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.245237 kubelet[2698]: W0413 19:28:00.245112 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.245237 kubelet[2698]: E0413 19:28:00.245131 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.247132 kubelet[2698]: E0413 19:28:00.246036 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.247344 kubelet[2698]: W0413 19:28:00.247254 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.247344 kubelet[2698]: E0413 19:28:00.247285 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.250105 kubelet[2698]: E0413 19:28:00.250068 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.250105 kubelet[2698]: W0413 19:28:00.250096 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.250330 kubelet[2698]: E0413 19:28:00.250118 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.251120 kubelet[2698]: E0413 19:28:00.251086 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.251120 kubelet[2698]: W0413 19:28:00.251109 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.251120 kubelet[2698]: E0413 19:28:00.251125 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.252164 kubelet[2698]: E0413 19:28:00.251682 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.252164 kubelet[2698]: W0413 19:28:00.251705 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.252164 kubelet[2698]: E0413 19:28:00.251717 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.252164 kubelet[2698]: E0413 19:28:00.252111 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.252164 kubelet[2698]: W0413 19:28:00.252122 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.252164 kubelet[2698]: E0413 19:28:00.252132 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.254015 kubelet[2698]: E0413 19:28:00.252621 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.254015 kubelet[2698]: W0413 19:28:00.252678 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.254015 kubelet[2698]: E0413 19:28:00.252690 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.254668 kubelet[2698]: E0413 19:28:00.254642 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.254668 kubelet[2698]: W0413 19:28:00.254663 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.254763 kubelet[2698]: E0413 19:28:00.254678 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.254960 kubelet[2698]: E0413 19:28:00.254945 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.254960 kubelet[2698]: W0413 19:28:00.254958 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.255037 kubelet[2698]: E0413 19:28:00.254968 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.256254 kubelet[2698]: E0413 19:28:00.256231 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.256254 kubelet[2698]: W0413 19:28:00.256249 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.256360 kubelet[2698]: E0413 19:28:00.256266 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.256985 kubelet[2698]: E0413 19:28:00.256446 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.256985 kubelet[2698]: W0413 19:28:00.256460 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.256985 kubelet[2698]: E0413 19:28:00.256469 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.258294 kubelet[2698]: E0413 19:28:00.258270 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.258294 kubelet[2698]: W0413 19:28:00.258291 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.258395 kubelet[2698]: E0413 19:28:00.258304 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.258973 kubelet[2698]: E0413 19:28:00.258484 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.258973 kubelet[2698]: W0413 19:28:00.258494 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.258973 kubelet[2698]: E0413 19:28:00.258503 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.261236 kubelet[2698]: E0413 19:28:00.261210 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.261236 kubelet[2698]: W0413 19:28:00.261230 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.261352 kubelet[2698]: E0413 19:28:00.261247 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.262062 kubelet[2698]: E0413 19:28:00.261413 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.262062 kubelet[2698]: W0413 19:28:00.261427 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.262062 kubelet[2698]: E0413 19:28:00.261435 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.262267 kubelet[2698]: E0413 19:28:00.262249 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.262267 kubelet[2698]: W0413 19:28:00.262265 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.262327 kubelet[2698]: E0413 19:28:00.262281 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:00.262491 kubelet[2698]: E0413 19:28:00.262477 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.262491 kubelet[2698]: W0413 19:28:00.262489 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.262585 kubelet[2698]: E0413 19:28:00.262498 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:00.264684 kubelet[2698]: E0413 19:28:00.264660 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:00.264684 kubelet[2698]: W0413 19:28:00.264682 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:00.264784 kubelet[2698]: E0413 19:28:00.264697 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.104406 kubelet[2698]: E0413 19:28:01.104294 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae" Apr 13 19:28:01.217790 kubelet[2698]: I0413 19:28:01.216625 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:01.240580 kubelet[2698]: E0413 19:28:01.240520 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.240580 kubelet[2698]: W0413 19:28:01.240547 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.242113 kubelet[2698]: E0413 19:28:01.241384 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.242113 kubelet[2698]: E0413 19:28:01.241818 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.242113 kubelet[2698]: W0413 19:28:01.241893 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.242113 kubelet[2698]: E0413 19:28:01.241907 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.242113 kubelet[2698]: E0413 19:28:01.242174 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.242113 kubelet[2698]: W0413 19:28:01.242241 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.242113 kubelet[2698]: E0413 19:28:01.242257 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.243111 kubelet[2698]: E0413 19:28:01.242923 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.243111 kubelet[2698]: W0413 19:28:01.242957 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.243111 kubelet[2698]: E0413 19:28:01.242970 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.243767 kubelet[2698]: E0413 19:28:01.243752 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.243931 kubelet[2698]: W0413 19:28:01.243885 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.243931 kubelet[2698]: E0413 19:28:01.243902 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.244478 kubelet[2698]: E0413 19:28:01.244379 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.244478 kubelet[2698]: W0413 19:28:01.244393 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.244478 kubelet[2698]: E0413 19:28:01.244404 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.245284 kubelet[2698]: E0413 19:28:01.245175 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.245284 kubelet[2698]: W0413 19:28:01.245188 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.245284 kubelet[2698]: E0413 19:28:01.245199 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.245810 kubelet[2698]: E0413 19:28:01.245796 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.245998 kubelet[2698]: W0413 19:28:01.245881 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.245998 kubelet[2698]: E0413 19:28:01.245898 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.246470 kubelet[2698]: E0413 19:28:01.246368 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.246470 kubelet[2698]: W0413 19:28:01.246380 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.246470 kubelet[2698]: E0413 19:28:01.246391 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.246749 kubelet[2698]: E0413 19:28:01.246662 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.246749 kubelet[2698]: W0413 19:28:01.246684 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.246749 kubelet[2698]: E0413 19:28:01.246694 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.247258 kubelet[2698]: E0413 19:28:01.247177 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.247258 kubelet[2698]: W0413 19:28:01.247191 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.247258 kubelet[2698]: E0413 19:28:01.247209 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.247749 kubelet[2698]: E0413 19:28:01.247625 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.247749 kubelet[2698]: W0413 19:28:01.247638 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.247749 kubelet[2698]: E0413 19:28:01.247648 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.248326 kubelet[2698]: E0413 19:28:01.248260 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.248326 kubelet[2698]: W0413 19:28:01.248273 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.248326 kubelet[2698]: E0413 19:28:01.248284 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.248910 kubelet[2698]: E0413 19:28:01.248717 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.248910 kubelet[2698]: W0413 19:28:01.248787 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.248910 kubelet[2698]: E0413 19:28:01.248799 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.249411 kubelet[2698]: E0413 19:28:01.249285 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.249411 kubelet[2698]: W0413 19:28:01.249298 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.249411 kubelet[2698]: E0413 19:28:01.249309 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.260341 kubelet[2698]: E0413 19:28:01.260292 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.260341 kubelet[2698]: W0413 19:28:01.260320 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.260341 kubelet[2698]: E0413 19:28:01.260338 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.261652 kubelet[2698]: E0413 19:28:01.261621 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.261652 kubelet[2698]: W0413 19:28:01.261641 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.261794 kubelet[2698]: E0413 19:28:01.261658 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.263145 kubelet[2698]: E0413 19:28:01.263119 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.263145 kubelet[2698]: W0413 19:28:01.263139 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.263267 kubelet[2698]: E0413 19:28:01.263156 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.263599 kubelet[2698]: E0413 19:28:01.263575 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.263599 kubelet[2698]: W0413 19:28:01.263591 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.263683 kubelet[2698]: E0413 19:28:01.263609 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.264164 kubelet[2698]: E0413 19:28:01.264141 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.264164 kubelet[2698]: W0413 19:28:01.264160 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.264402 kubelet[2698]: E0413 19:28:01.264173 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.265017 kubelet[2698]: E0413 19:28:01.264998 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.265017 kubelet[2698]: W0413 19:28:01.265015 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.265124 kubelet[2698]: E0413 19:28:01.265027 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.265362 kubelet[2698]: E0413 19:28:01.265344 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.265362 kubelet[2698]: W0413 19:28:01.265360 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.265457 kubelet[2698]: E0413 19:28:01.265371 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.266128 kubelet[2698]: E0413 19:28:01.266106 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.266128 kubelet[2698]: W0413 19:28:01.266125 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.266290 kubelet[2698]: E0413 19:28:01.266137 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.266647 kubelet[2698]: E0413 19:28:01.266629 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.266647 kubelet[2698]: W0413 19:28:01.266646 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.266744 kubelet[2698]: E0413 19:28:01.266658 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.267154 kubelet[2698]: E0413 19:28:01.267135 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.267154 kubelet[2698]: W0413 19:28:01.267152 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.267230 kubelet[2698]: E0413 19:28:01.267164 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.268061 kubelet[2698]: E0413 19:28:01.268038 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.268061 kubelet[2698]: W0413 19:28:01.268058 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.268165 kubelet[2698]: E0413 19:28:01.268083 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:28:01.268459 kubelet[2698]: E0413 19:28:01.268428 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.268512 kubelet[2698]: W0413 19:28:01.268465 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.268512 kubelet[2698]: E0413 19:28:01.268488 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:28:01.269524 kubelet[2698]: E0413 19:28:01.269502 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:28:01.269524 kubelet[2698]: W0413 19:28:01.269521 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:28:01.269626 kubelet[2698]: E0413 19:28:01.269534 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 13 19:28:01.270345 kubelet[2698]: E0413 19:28:01.270314 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 19:28:01.270345 kubelet[2698]: W0413 19:28:01.270334 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 19:28:01.270345 kubelet[2698]: E0413 19:28:01.270348 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 19:28:01.270883 kubelet[2698]: E0413 19:28:01.270812 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 19:28:01.270883 kubelet[2698]: W0413 19:28:01.270827 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 19:28:01.270883 kubelet[2698]: E0413 19:28:01.270837 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 19:28:01.271848 kubelet[2698]: E0413 19:28:01.271787 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 19:28:01.271848 kubelet[2698]: W0413 19:28:01.271805 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 19:28:01.271848 kubelet[2698]: E0413 19:28:01.271817 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 19:28:01.272424 kubelet[2698]: E0413 19:28:01.272289 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 19:28:01.272424 kubelet[2698]: W0413 19:28:01.272309 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 19:28:01.272424 kubelet[2698]: E0413 19:28:01.272326 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 19:28:01.272556 kubelet[2698]: E0413 19:28:01.272537 2698 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 19:28:01.272556 kubelet[2698]: W0413 19:28:01.272547 2698 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 19:28:01.272556 kubelet[2698]: E0413 19:28:01.272556 2698 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 19:28:01.303439 containerd[1620]: time="2026-04-13T19:28:01.303180528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:01.305598 containerd[1620]: time="2026-04-13T19:28:01.305562310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Apr 13 19:28:01.307262 containerd[1620]: time="2026-04-13T19:28:01.307189252Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:01.310143 containerd[1620]: time="2026-04-13T19:28:01.310038189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:01.311186 containerd[1620]: time="2026-04-13T19:28:01.311067035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.536013441s"
Apr 13 19:28:01.311186 containerd[1620]: time="2026-04-13T19:28:01.311113762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Apr 13 19:28:01.319369 containerd[1620]: time="2026-04-13T19:28:01.319243708Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 13 19:28:01.336288 containerd[1620]: time="2026-04-13T19:28:01.336226796Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403\""
Apr 13 19:28:01.338258 containerd[1620]: time="2026-04-13T19:28:01.338215156Z" level=info msg="StartContainer for \"ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403\""
Apr 13 19:28:01.371710 systemd[1]: run-containerd-runc-k8s.io-ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403-runc.h4SbrD.mount: Deactivated successfully.
Apr 13 19:28:01.410879 containerd[1620]: time="2026-04-13T19:28:01.410826061Z" level=info msg="StartContainer for \"ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403\" returns successfully"
Apr 13 19:28:01.567417 containerd[1620]: time="2026-04-13T19:28:01.567337044Z" level=info msg="shim disconnected" id=ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403 namespace=k8s.io
Apr 13 19:28:01.567417 containerd[1620]: time="2026-04-13T19:28:01.567406175Z" level=warning msg="cleaning up after shim disconnected" id=ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403 namespace=k8s.io
Apr 13 19:28:01.567417 containerd[1620]: time="2026-04-13T19:28:01.567414897Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:28:02.222412 containerd[1620]: time="2026-04-13T19:28:02.221836265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 13 19:28:02.245150 kubelet[2698]: I0413 19:28:02.244873 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599bb458d5-5nls5" podStartSLOduration=3.831394542 podStartE2EDuration="6.244836841s" podCreationTimestamp="2026-04-13 19:27:56 +0000 UTC" firstStartedPulling="2026-04-13 19:27:57.360513142 +0000 UTC m=+19.405840072" lastFinishedPulling="2026-04-13 19:27:59.773955361 +0000 UTC m=+21.819282371" observedRunningTime="2026-04-13 19:28:00.250477285 +0000 UTC m=+22.295804215" watchObservedRunningTime="2026-04-13 19:28:02.244836841 +0000 UTC m=+24.290163811"
Apr 13 19:28:02.331275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca92e3b9027d18a9833edeb912eba9dd245782037762e60f65789419becdb403-rootfs.mount: Deactivated successfully.
Apr 13 19:28:02.895519 kubelet[2698]: I0413 19:28:02.894754 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:28:03.105202 kubelet[2698]: E0413 19:28:03.105104 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae"
Apr 13 19:28:05.105777 kubelet[2698]: E0413 19:28:05.105299 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae"
Apr 13 19:28:06.776531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635917567.mount: Deactivated successfully.
Apr 13 19:28:06.804086 containerd[1620]: time="2026-04-13T19:28:06.803056444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:06.806109 containerd[1620]: time="2026-04-13T19:28:06.806049874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 13 19:28:06.807963 containerd[1620]: time="2026-04-13T19:28:06.807236948Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:06.809778 containerd[1620]: time="2026-04-13T19:28:06.809728793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:06.810635 containerd[1620]: time="2026-04-13T19:28:06.810604027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.588504201s"
Apr 13 19:28:06.810732 containerd[1620]: time="2026-04-13T19:28:06.810715921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 13 19:28:06.817059 containerd[1620]: time="2026-04-13T19:28:06.817018862Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 13 19:28:06.834582 containerd[1620]: time="2026-04-13T19:28:06.834538943Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476\""
Apr 13 19:28:06.837433 containerd[1620]: time="2026-04-13T19:28:06.837405556Z" level=info msg="StartContainer for \"17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476\""
Apr 13 19:28:06.903858 containerd[1620]: time="2026-04-13T19:28:06.903742232Z" level=info msg="StartContainer for \"17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476\" returns successfully"
Apr 13 19:28:07.104684 kubelet[2698]: E0413 19:28:07.104537 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae"
Apr 13 19:28:07.190788 containerd[1620]: time="2026-04-13T19:28:07.190716005Z" level=info msg="shim disconnected" id=17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476 namespace=k8s.io
Apr 13 19:28:07.190788 containerd[1620]: time="2026-04-13T19:28:07.190785014Z" level=warning msg="cleaning up after shim disconnected" id=17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476 namespace=k8s.io
Apr 13 19:28:07.191059 containerd[1620]: time="2026-04-13T19:28:07.190800016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:28:07.206390 containerd[1620]: time="2026-04-13T19:28:07.206341321Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:28:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:28:07.238709 containerd[1620]: time="2026-04-13T19:28:07.238464383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 13 19:28:07.778375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e087be64e1549449571175c5742d71fa8cb14ce88b6fe70a131a43db84b476-rootfs.mount: Deactivated successfully.
Apr 13 19:28:09.105171 kubelet[2698]: E0413 19:28:09.105032 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae"
Apr 13 19:28:10.010183 containerd[1620]: time="2026-04-13T19:28:10.010121729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:10.011611 containerd[1620]: time="2026-04-13T19:28:10.011557209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 13 19:28:10.012964 containerd[1620]: time="2026-04-13T19:28:10.012630450Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:10.016703 containerd[1620]: time="2026-04-13T19:28:10.016662581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:28:10.017865 containerd[1620]: time="2026-04-13T19:28:10.017816430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.779298241s"
Apr 13 19:28:10.018026 containerd[1620]: time="2026-04-13T19:28:10.017991490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 13 19:28:10.023275 containerd[1620]: time="2026-04-13T19:28:10.023237038Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 13 19:28:10.043485 containerd[1620]: time="2026-04-13T19:28:10.043385934Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c\""
Apr 13 19:28:10.044614 containerd[1620]: time="2026-04-13T19:28:10.044586028Z" level=info msg="StartContainer for \"97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c\""
Apr 13 19:28:10.099793 containerd[1620]: time="2026-04-13T19:28:10.099736885Z" level=info msg="StartContainer for \"97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c\" returns successfully"
Apr 13 19:28:10.667411 kubelet[2698]: I0413 19:28:10.667378 2698 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 13 19:28:10.676056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c-rootfs.mount: Deactivated successfully.
Apr 13 19:28:10.768266 containerd[1620]: time="2026-04-13T19:28:10.768210069Z" level=info msg="shim disconnected" id=97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c namespace=k8s.io
Apr 13 19:28:10.768973 containerd[1620]: time="2026-04-13T19:28:10.768406011Z" level=warning msg="cleaning up after shim disconnected" id=97487fa944631df076302c25fcda3f4785e770466d69804c86c22f70b32a142c namespace=k8s.io
Apr 13 19:28:10.768973 containerd[1620]: time="2026-04-13T19:28:10.768422252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:28:10.806456 containerd[1620]: time="2026-04-13T19:28:10.806358581Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:28:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:28:10.839424 kubelet[2698]: I0413 19:28:10.838244 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8159562d-f09d-4214-b84d-626c5db8278b-goldmane-key-pair\") pod \"goldmane-5b85766d88-7vqgk\" (UID: \"8159562d-f09d-4214-b84d-626c5db8278b\") " pod="calico-system/goldmane-5b85766d88-7vqgk"
Apr 13 19:28:10.839424 kubelet[2698]: I0413 19:28:10.838314 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-nginx-config\") pod \"whisker-74b58fcbcd-6wksg\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:10.839424 kubelet[2698]: I0413 19:28:10.838350 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd51994b-5cb8-4cf8-81b2-1ad58037a2fe-config-volume\") pod \"coredns-674b8bbfcf-7z59s\" (UID: \"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe\") " pod="kube-system/coredns-674b8bbfcf-7z59s"
Apr 13 19:28:10.839424 kubelet[2698]: I0413 19:28:10.838465 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781d0594-7e8f-4f56-9e23-737cbe5d8293-config-volume\") pod \"coredns-674b8bbfcf-ft6d4\" (UID: \"781d0594-7e8f-4f56-9e23-737cbe5d8293\") " pod="kube-system/coredns-674b8bbfcf-ft6d4"
Apr 13 19:28:10.839424 kubelet[2698]: I0413 19:28:10.838499 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9tsk\" (UniqueName: \"kubernetes.io/projected/dd51994b-5cb8-4cf8-81b2-1ad58037a2fe-kube-api-access-h9tsk\") pod \"coredns-674b8bbfcf-7z59s\" (UID: \"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe\") " pod="kube-system/coredns-674b8bbfcf-7z59s"
Apr 13 19:28:10.840123 kubelet[2698]: I0413 19:28:10.838531 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5241b13b-999d-4851-9b4e-47561517f354-calico-apiserver-certs\") pod \"calico-apiserver-7885d7c4d8-kjddq\" (UID: \"5241b13b-999d-4851-9b4e-47561517f354\") " pod="calico-system/calico-apiserver-7885d7c4d8-kjddq"
Apr 13 19:28:10.840123 kubelet[2698]: I0413 19:28:10.838557 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8159562d-f09d-4214-b84d-626c5db8278b-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-7vqgk\" (UID: \"8159562d-f09d-4214-b84d-626c5db8278b\") " pod="calico-system/goldmane-5b85766d88-7vqgk"
Apr 13 19:28:10.840123 kubelet[2698]: I0413 19:28:10.838588 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-ca-bundle\") pod \"whisker-74b58fcbcd-6wksg\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:10.840123 kubelet[2698]: I0413 19:28:10.838614 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtrh\" (UniqueName: \"kubernetes.io/projected/060246f3-68f3-47a5-b10e-f5d0dbb7a159-kube-api-access-xqtrh\") pod \"whisker-74b58fcbcd-6wksg\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:10.840123 kubelet[2698]: I0413 19:28:10.838645 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79g5d\" (UniqueName: \"kubernetes.io/projected/5241b13b-999d-4851-9b4e-47561517f354-kube-api-access-79g5d\") pod \"calico-apiserver-7885d7c4d8-kjddq\" (UID: \"5241b13b-999d-4851-9b4e-47561517f354\") " pod="calico-system/calico-apiserver-7885d7c4d8-kjddq"
Apr 13 19:28:10.840286 kubelet[2698]: I0413 19:28:10.838672 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcnnk\" (UniqueName: \"kubernetes.io/projected/8dfd1508-5560-4545-9951-14512e76963d-kube-api-access-tcnnk\") pod \"calico-kube-controllers-669bfd5bbc-9897g\" (UID: \"8dfd1508-5560-4545-9951-14512e76963d\") " pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g"
Apr 13 19:28:10.840286 kubelet[2698]: I0413 19:28:10.838702 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4fxh\" (UniqueName: \"kubernetes.io/projected/5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9-kube-api-access-l4fxh\") pod \"calico-apiserver-7885d7c4d8-s4bvw\" (UID: \"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9\") " pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw"
Apr 13 19:28:10.840286 kubelet[2698]: I0413 19:28:10.838728 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbq66\" (UniqueName: \"kubernetes.io/projected/8159562d-f09d-4214-b84d-626c5db8278b-kube-api-access-tbq66\") pod \"goldmane-5b85766d88-7vqgk\" (UID: \"8159562d-f09d-4214-b84d-626c5db8278b\") " pod="calico-system/goldmane-5b85766d88-7vqgk"
Apr 13 19:28:10.840286 kubelet[2698]: I0413 19:28:10.838763 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dfd1508-5560-4545-9951-14512e76963d-tigera-ca-bundle\") pod \"calico-kube-controllers-669bfd5bbc-9897g\" (UID: \"8dfd1508-5560-4545-9951-14512e76963d\") " pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g"
Apr 13 19:28:10.840286 kubelet[2698]: I0413 19:28:10.838792 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9-calico-apiserver-certs\") pod \"calico-apiserver-7885d7c4d8-s4bvw\" (UID: \"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9\") " pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw"
Apr 13 19:28:10.840429 kubelet[2698]: I0413 19:28:10.838829 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5p9\" (UniqueName: \"kubernetes.io/projected/781d0594-7e8f-4f56-9e23-737cbe5d8293-kube-api-access-th5p9\") pod \"coredns-674b8bbfcf-ft6d4\" (UID: \"781d0594-7e8f-4f56-9e23-737cbe5d8293\") " pod="kube-system/coredns-674b8bbfcf-ft6d4"
Apr 13 19:28:10.840429 kubelet[2698]: I0413 19:28:10.838857 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8159562d-f09d-4214-b84d-626c5db8278b-config\") pod \"goldmane-5b85766d88-7vqgk\" (UID: \"8159562d-f09d-4214-b84d-626c5db8278b\") " pod="calico-system/goldmane-5b85766d88-7vqgk"
Apr 13 19:28:10.840429 kubelet[2698]: I0413 19:28:10.838883 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-backend-key-pair\") pod \"whisker-74b58fcbcd-6wksg\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:11.082077 containerd[1620]: time="2026-04-13T19:28:11.081848441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b58fcbcd-6wksg,Uid:060246f3-68f3-47a5-b10e-f5d0dbb7a159,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.089895 containerd[1620]: time="2026-04-13T19:28:11.089494308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z59s,Uid:dd51994b-5cb8-4cf8-81b2-1ad58037a2fe,Namespace:kube-system,Attempt:0,}"
Apr 13 19:28:11.091521 containerd[1620]: time="2026-04-13T19:28:11.091341068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7vqgk,Uid:8159562d-f09d-4214-b84d-626c5db8278b,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.092210 containerd[1620]: time="2026-04-13T19:28:11.092170798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-kjddq,Uid:5241b13b-999d-4851-9b4e-47561517f354,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.121604 containerd[1620]: time="2026-04-13T19:28:11.121344952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdt82,Uid:cb288a13-6f3a-4bf1-9058-ef9f893933ae,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.130418 containerd[1620]: time="2026-04-13T19:28:11.130346206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-s4bvw,Uid:5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.130616 containerd[1620]: time="2026-04-13T19:28:11.130593192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft6d4,Uid:781d0594-7e8f-4f56-9e23-737cbe5d8293,Namespace:kube-system,Attempt:0,}"
Apr 13 19:28:11.130763 containerd[1620]: time="2026-04-13T19:28:11.130743808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669bfd5bbc-9897g,Uid:8dfd1508-5560-4545-9951-14512e76963d,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:11.361642 containerd[1620]: time="2026-04-13T19:28:11.361183806Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 13 19:28:11.399655 containerd[1620]: time="2026-04-13T19:28:11.399597280Z" level=error msg="Failed to destroy network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.400422 containerd[1620]: time="2026-04-13T19:28:11.400384285Z" level=error msg="encountered an error cleaning up failed sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.400972 containerd[1620]: time="2026-04-13T19:28:11.400438891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-kjddq,Uid:5241b13b-999d-4851-9b4e-47561517f354,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.401897 kubelet[2698]: E0413 19:28:11.401857 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.403646 kubelet[2698]: E0413 19:28:11.402441 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7885d7c4d8-kjddq"
Apr 13 19:28:11.403646 kubelet[2698]: E0413 19:28:11.402472 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7885d7c4d8-kjddq"
Apr 13 19:28:11.403646 kubelet[2698]: E0413 19:28:11.402525 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7885d7c4d8-kjddq_calico-system(5241b13b-999d-4851-9b4e-47561517f354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7885d7c4d8-kjddq_calico-system(5241b13b-999d-4851-9b4e-47561517f354)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7885d7c4d8-kjddq" podUID="5241b13b-999d-4851-9b4e-47561517f354"
Apr 13 19:28:11.430857 containerd[1620]: time="2026-04-13T19:28:11.430791373Z" level=info msg="CreateContainer within sandbox \"e0bd23d9111fc750cb19c475c70669204ea49dc58f0187cd1d8f4839b6522e67\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"da1bbe2d1c92643c92a5b11db6ffdfe21df82370889164033c5f873ed3dc2c6e\""
Apr 13 19:28:11.433997 containerd[1620]: time="2026-04-13T19:28:11.432912242Z" level=info msg="StartContainer for \"da1bbe2d1c92643c92a5b11db6ffdfe21df82370889164033c5f873ed3dc2c6e\""
Apr 13 19:28:11.481967 containerd[1620]: time="2026-04-13T19:28:11.481850494Z" level=error msg="Failed to destroy network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.482828 containerd[1620]: time="2026-04-13T19:28:11.482641379Z" level=error msg="encountered an error cleaning up failed sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.482828 containerd[1620]: time="2026-04-13T19:28:11.482698906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z59s,Uid:dd51994b-5cb8-4cf8-81b2-1ad58037a2fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.482990 kubelet[2698]: E0413 19:28:11.482911 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.483091 kubelet[2698]: E0413 19:28:11.483044 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7z59s"
Apr 13 19:28:11.483091 kubelet[2698]: E0413 19:28:11.483080 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7z59s"
Apr 13 19:28:11.483166 kubelet[2698]: E0413 19:28:11.483132 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7z59s_kube-system(dd51994b-5cb8-4cf8-81b2-1ad58037a2fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7z59s_kube-system(dd51994b-5cb8-4cf8-81b2-1ad58037a2fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7z59s" podUID="dd51994b-5cb8-4cf8-81b2-1ad58037a2fe"
Apr 13 19:28:11.484693 containerd[1620]: time="2026-04-13T19:28:11.483680332Z" level=error msg="Failed to destroy network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.484693 containerd[1620]: time="2026-04-13T19:28:11.484276876Z" level=error msg="encountered an error cleaning up failed sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.484693 containerd[1620]: time="2026-04-13T19:28:11.484324681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b58fcbcd-6wksg,Uid:060246f3-68f3-47a5-b10e-f5d0dbb7a159,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.485125 kubelet[2698]: E0413 19:28:11.484898 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:11.485125 kubelet[2698]: E0413 19:28:11.485091 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:11.485125 kubelet[2698]: E0413 19:28:11.485114 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b58fcbcd-6wksg"
Apr 13 19:28:11.485215 kubelet[2698]: E0413 19:28:11.485166 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74b58fcbcd-6wksg_calico-system(060246f3-68f3-47a5-b10e-f5d0dbb7a159)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74b58fcbcd-6wksg_calico-system(060246f3-68f3-47a5-b10e-f5d0dbb7a159)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running
and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b58fcbcd-6wksg" podUID="060246f3-68f3-47a5-b10e-f5d0dbb7a159" Apr 13 19:28:11.496738 containerd[1620]: time="2026-04-13T19:28:11.496602209Z" level=error msg="Failed to destroy network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.497183 containerd[1620]: time="2026-04-13T19:28:11.497152628Z" level=error msg="encountered an error cleaning up failed sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.498411 containerd[1620]: time="2026-04-13T19:28:11.498209063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7vqgk,Uid:8159562d-f09d-4214-b84d-626c5db8278b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.498526 kubelet[2698]: E0413 19:28:11.498441 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.498526 kubelet[2698]: E0413 
19:28:11.498502 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-7vqgk" Apr 13 19:28:11.498605 kubelet[2698]: E0413 19:28:11.498527 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-7vqgk" Apr 13 19:28:11.498605 kubelet[2698]: E0413 19:28:11.498579 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-7vqgk_calico-system(8159562d-f09d-4214-b84d-626c5db8278b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-7vqgk_calico-system(8159562d-f09d-4214-b84d-626c5db8278b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-7vqgk" podUID="8159562d-f09d-4214-b84d-626c5db8278b" Apr 13 19:28:11.515387 containerd[1620]: time="2026-04-13T19:28:11.515324953Z" level=error msg="Failed to destroy network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.516466 containerd[1620]: time="2026-04-13T19:28:11.515975224Z" level=error msg="encountered an error cleaning up failed sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.516466 containerd[1620]: time="2026-04-13T19:28:11.516076635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdt82,Uid:cb288a13-6f3a-4bf1-9058-ef9f893933ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.516615 kubelet[2698]: E0413 19:28:11.516292 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.516615 kubelet[2698]: E0413 19:28:11.516350 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xdt82" Apr 13 19:28:11.516615 kubelet[2698]: E0413 19:28:11.516371 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xdt82" Apr 13 19:28:11.518171 kubelet[2698]: E0413 19:28:11.516423 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xdt82_calico-system(cb288a13-6f3a-4bf1-9058-ef9f893933ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xdt82_calico-system(cb288a13-6f3a-4bf1-9058-ef9f893933ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xdt82" podUID="cb288a13-6f3a-4bf1-9058-ef9f893933ae" Apr 13 19:28:11.520915 containerd[1620]: time="2026-04-13T19:28:11.520763421Z" level=error msg="Failed to destroy network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.521593 containerd[1620]: time="2026-04-13T19:28:11.521553307Z" level=error msg="encountered an error cleaning up failed sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.521657 containerd[1620]: time="2026-04-13T19:28:11.521617074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft6d4,Uid:781d0594-7e8f-4f56-9e23-737cbe5d8293,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.521956 kubelet[2698]: E0413 19:28:11.521852 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.521956 kubelet[2698]: E0413 19:28:11.521903 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft6d4" Apr 13 19:28:11.521956 kubelet[2698]: E0413 19:28:11.521922 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft6d4" Apr 13 19:28:11.522122 kubelet[2698]: E0413 19:28:11.521985 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft6d4_kube-system(781d0594-7e8f-4f56-9e23-737cbe5d8293)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft6d4_kube-system(781d0594-7e8f-4f56-9e23-737cbe5d8293)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft6d4" podUID="781d0594-7e8f-4f56-9e23-737cbe5d8293" Apr 13 19:28:11.543524 containerd[1620]: time="2026-04-13T19:28:11.543428152Z" level=error msg="Failed to destroy network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.543886 containerd[1620]: time="2026-04-13T19:28:11.543859799Z" level=error msg="encountered an error cleaning up failed sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.543971 containerd[1620]: time="2026-04-13T19:28:11.543923486Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-669bfd5bbc-9897g,Uid:8dfd1508-5560-4545-9951-14512e76963d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.545761 kubelet[2698]: E0413 19:28:11.545700 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.545853 kubelet[2698]: E0413 19:28:11.545772 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g" Apr 13 19:28:11.545853 kubelet[2698]: E0413 19:28:11.545800 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g" Apr 13 19:28:11.546055 kubelet[2698]: E0413 19:28:11.545848 2698 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669bfd5bbc-9897g_calico-system(8dfd1508-5560-4545-9951-14512e76963d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669bfd5bbc-9897g_calico-system(8dfd1508-5560-4545-9951-14512e76963d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g" podUID="8dfd1508-5560-4545-9951-14512e76963d" Apr 13 19:28:11.552920 containerd[1620]: time="2026-04-13T19:28:11.551984757Z" level=error msg="Failed to destroy network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.559834 containerd[1620]: time="2026-04-13T19:28:11.557383341Z" level=error msg="encountered an error cleaning up failed sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.559834 containerd[1620]: time="2026-04-13T19:28:11.557853952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-s4bvw,Uid:5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.560062 kubelet[2698]: E0413 19:28:11.558620 2698 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:28:11.560062 kubelet[2698]: E0413 19:28:11.558776 2698 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw" Apr 13 19:28:11.560062 kubelet[2698]: E0413 19:28:11.558803 2698 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw" Apr 13 19:28:11.560155 kubelet[2698]: E0413 19:28:11.558879 2698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7885d7c4d8-s4bvw_calico-system(5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7885d7c4d8-s4bvw_calico-system(5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw" podUID="5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9" Apr 13 19:28:11.599556 containerd[1620]: time="2026-04-13T19:28:11.599442849Z" level=info msg="StartContainer for \"da1bbe2d1c92643c92a5b11db6ffdfe21df82370889164033c5f873ed3dc2c6e\" returns successfully" Apr 13 19:28:12.042396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e-shm.mount: Deactivated successfully. Apr 13 19:28:12.042559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763-shm.mount: Deactivated successfully. Apr 13 19:28:12.042640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777-shm.mount: Deactivated successfully. 
Apr 13 19:28:12.310757 kubelet[2698]: I0413 19:28:12.310643 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:12.314965 containerd[1620]: time="2026-04-13T19:28:12.313615581Z" level=info msg="StopPodSandbox for \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\"" Apr 13 19:28:12.314965 containerd[1620]: time="2026-04-13T19:28:12.313869128Z" level=info msg="Ensure that sandbox dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1 in task-service has been cleanup successfully" Apr 13 19:28:12.322962 kubelet[2698]: I0413 19:28:12.321058 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:12.324398 containerd[1620]: time="2026-04-13T19:28:12.323211504Z" level=info msg="StopPodSandbox for \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\"" Apr 13 19:28:12.332580 kubelet[2698]: I0413 19:28:12.330774 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:12.333953 containerd[1620]: time="2026-04-13T19:28:12.333874138Z" level=info msg="StopPodSandbox for \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\"" Apr 13 19:28:12.334184 containerd[1620]: time="2026-04-13T19:28:12.334156368Z" level=info msg="Ensure that sandbox 7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e in task-service has been cleanup successfully" Apr 13 19:28:12.341841 kubelet[2698]: I0413 19:28:12.341744 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:12.349068 containerd[1620]: time="2026-04-13T19:28:12.349025922Z" level=info msg="StopPodSandbox for 
\"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\"" Apr 13 19:28:12.349289 containerd[1620]: time="2026-04-13T19:28:12.349220302Z" level=info msg="Ensure that sandbox f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7 in task-service has been cleanup successfully" Apr 13 19:28:12.352370 containerd[1620]: time="2026-04-13T19:28:12.352330627Z" level=info msg="Ensure that sandbox a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f in task-service has been cleanup successfully" Apr 13 19:28:12.359513 kubelet[2698]: I0413 19:28:12.359475 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:12.363347 containerd[1620]: time="2026-04-13T19:28:12.362324832Z" level=info msg="StopPodSandbox for \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\"" Apr 13 19:28:12.363347 containerd[1620]: time="2026-04-13T19:28:12.362501610Z" level=info msg="Ensure that sandbox 92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b in task-service has been cleanup successfully" Apr 13 19:28:12.366234 kubelet[2698]: I0413 19:28:12.366198 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:12.371511 containerd[1620]: time="2026-04-13T19:28:12.371273047Z" level=info msg="StopPodSandbox for \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\"" Apr 13 19:28:12.371852 containerd[1620]: time="2026-04-13T19:28:12.371823304Z" level=info msg="Ensure that sandbox 270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c in task-service has been cleanup successfully" Apr 13 19:28:12.378840 kubelet[2698]: I0413 19:28:12.378796 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 
19:28:12.381527 containerd[1620]: time="2026-04-13T19:28:12.381335978Z" level=info msg="StopPodSandbox for \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\"" Apr 13 19:28:12.384635 containerd[1620]: time="2026-04-13T19:28:12.384559355Z" level=info msg="Ensure that sandbox 1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777 in task-service has been cleanup successfully" Apr 13 19:28:12.390516 kubelet[2698]: I0413 19:28:12.390472 2698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:12.393164 containerd[1620]: time="2026-04-13T19:28:12.392548790Z" level=info msg="StopPodSandbox for \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\"" Apr 13 19:28:12.393164 containerd[1620]: time="2026-04-13T19:28:12.392760772Z" level=info msg="Ensure that sandbox eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763 in task-service has been cleanup successfully" Apr 13 19:28:12.596048 kubelet[2698]: I0413 19:28:12.594385 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sdgtk" podStartSLOduration=3.019815873 podStartE2EDuration="15.594362882s" podCreationTimestamp="2026-04-13 19:27:57 +0000 UTC" firstStartedPulling="2026-04-13 19:27:57.444901444 +0000 UTC m=+19.490228374" lastFinishedPulling="2026-04-13 19:28:10.019448453 +0000 UTC m=+32.064775383" observedRunningTime="2026-04-13 19:28:12.341016445 +0000 UTC m=+34.386343375" watchObservedRunningTime="2026-04-13 19:28:12.594362882 +0000 UTC m=+34.639689812" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.643 [INFO][3937] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.643 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" iface="eth0" netns="/var/run/netns/cni-1b44b0c0-adfc-a570-52f3-a5c93967b39f" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" iface="eth0" netns="/var/run/netns/cni-1b44b0c0-adfc-a570-52f3-a5c93967b39f" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" iface="eth0" netns="/var/run/netns/cni-1b44b0c0-adfc-a570-52f3-a5c93967b39f" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3937] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3937] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.807 [INFO][3992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.808 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.808 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.820 [WARNING][3992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.820 [INFO][3992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.822 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.841625 containerd[1620]: 2026-04-13 19:28:12.829 [INFO][3937] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:12.844276 containerd[1620]: time="2026-04-13T19:28:12.843173525Z" level=info msg="TearDown network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" successfully" Apr 13 19:28:12.844276 containerd[1620]: time="2026-04-13T19:28:12.844187911Z" level=info msg="StopPodSandbox for \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" returns successfully" Apr 13 19:28:12.846458 containerd[1620]: time="2026-04-13T19:28:12.846266288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft6d4,Uid:781d0594-7e8f-4f56-9e23-737cbe5d8293,Namespace:kube-system,Attempt:1,}" Apr 13 19:28:12.847973 systemd[1]: run-netns-cni\x2d1b44b0c0\x2dadfc\x2da570\x2d52f3\x2da5c93967b39f.mount: Deactivated successfully. 
Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.634 [INFO][3883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.642 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" iface="eth0" netns="/var/run/netns/cni-9188b4a0-f72a-7bea-696b-5fb5c0c9169f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" iface="eth0" netns="/var/run/netns/cni-9188b4a0-f72a-7bea-696b-5fb5c0c9169f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" iface="eth0" netns="/var/run/netns/cni-9188b4a0-f72a-7bea-696b-5fb5c0c9169f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.825 [INFO][3991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 
19:28:12.826 [INFO][3991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.826 [INFO][3991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.853 [WARNING][3991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.854 [INFO][3991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.856 [INFO][3991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.876139 containerd[1620]: 2026-04-13 19:28:12.864 [INFO][3883] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:12.878515 containerd[1620]: time="2026-04-13T19:28:12.877076988Z" level=info msg="TearDown network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" successfully" Apr 13 19:28:12.878515 containerd[1620]: time="2026-04-13T19:28:12.877108592Z" level=info msg="StopPodSandbox for \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" returns successfully" Apr 13 19:28:12.878515 containerd[1620]: time="2026-04-13T19:28:12.878413848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669bfd5bbc-9897g,Uid:8dfd1508-5560-4545-9951-14512e76963d,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.606 [INFO][3840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.607 [INFO][3840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" iface="eth0" netns="/var/run/netns/cni-3f0a59a4-a3d0-6a3e-5dd0-71c3ffff2611" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.608 [INFO][3840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" iface="eth0" netns="/var/run/netns/cni-3f0a59a4-a3d0-6a3e-5dd0-71c3ffff2611" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.608 [INFO][3840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" iface="eth0" netns="/var/run/netns/cni-3f0a59a4-a3d0-6a3e-5dd0-71c3ffff2611" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.608 [INFO][3840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.608 [INFO][3840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.858 [INFO][3979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.858 [INFO][3979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.858 [INFO][3979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.879 [WARNING][3979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.879 [INFO][3979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.881 [INFO][3979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.890881 containerd[1620]: 2026-04-13 19:28:12.885 [INFO][3840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:12.892127 containerd[1620]: time="2026-04-13T19:28:12.891500736Z" level=info msg="TearDown network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" successfully" Apr 13 19:28:12.892127 containerd[1620]: time="2026-04-13T19:28:12.891529139Z" level=info msg="StopPodSandbox for \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" returns successfully" Apr 13 19:28:12.893957 containerd[1620]: time="2026-04-13T19:28:12.893110464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7vqgk,Uid:8159562d-f09d-4214-b84d-626c5db8278b,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.607 [INFO][3869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.607 [INFO][3869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" iface="eth0" netns="/var/run/netns/cni-f36cbbe5-a43e-4b19-2950-2488ab9ed63b" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.607 [INFO][3869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" iface="eth0" netns="/var/run/netns/cni-f36cbbe5-a43e-4b19-2950-2488ab9ed63b" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.609 [INFO][3869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" iface="eth0" netns="/var/run/netns/cni-f36cbbe5-a43e-4b19-2950-2488ab9ed63b" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.609 [INFO][3869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.609 [INFO][3869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.834 [INFO][3978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.834 [INFO][3978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.882 [INFO][3978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.894 [WARNING][3978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.894 [INFO][3978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.896 [INFO][3978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.909346 containerd[1620]: 2026-04-13 19:28:12.905 [INFO][3869] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:12.910214 containerd[1620]: time="2026-04-13T19:28:12.909488736Z" level=info msg="TearDown network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" successfully" Apr 13 19:28:12.910214 containerd[1620]: time="2026-04-13T19:28:12.909513978Z" level=info msg="StopPodSandbox for \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" returns successfully" Apr 13 19:28:12.912328 containerd[1620]: time="2026-04-13T19:28:12.912278267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-kjddq,Uid:5241b13b-999d-4851-9b4e-47561517f354,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.695 [INFO][3935] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.696 [INFO][3935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" iface="eth0" netns="/var/run/netns/cni-c87c58ee-ea22-b9e6-6f52-3b877acbc55b" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.696 [INFO][3935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" iface="eth0" netns="/var/run/netns/cni-c87c58ee-ea22-b9e6-6f52-3b877acbc55b" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.696 [INFO][3935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" iface="eth0" netns="/var/run/netns/cni-c87c58ee-ea22-b9e6-6f52-3b877acbc55b" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.696 [INFO][3935] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.696 [INFO][3935] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.839 [INFO][4011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.839 [INFO][4011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.896 [INFO][4011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.913 [WARNING][4011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.913 [INFO][4011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.918 [INFO][4011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.930274 containerd[1620]: 2026-04-13 19:28:12.927 [INFO][3935] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:12.930274 containerd[1620]: time="2026-04-13T19:28:12.930222343Z" level=info msg="TearDown network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" successfully" Apr 13 19:28:12.930274 containerd[1620]: time="2026-04-13T19:28:12.930248625Z" level=info msg="StopPodSandbox for \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" returns successfully" Apr 13 19:28:12.932500 containerd[1620]: time="2026-04-13T19:28:12.932351365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z59s,Uid:dd51994b-5cb8-4cf8-81b2-1ad58037a2fe,Namespace:kube-system,Attempt:1,}" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.643 [INFO][3934] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.643 [INFO][3934] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" iface="eth0" netns="/var/run/netns/cni-a8294833-fd66-7868-7945-0a1325ea24ae" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.645 [INFO][3934] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" iface="eth0" netns="/var/run/netns/cni-a8294833-fd66-7868-7945-0a1325ea24ae" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3934] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" iface="eth0" netns="/var/run/netns/cni-a8294833-fd66-7868-7945-0a1325ea24ae" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3934] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.646 [INFO][3934] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.863 [INFO][3993] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.863 [INFO][3993] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.918 [INFO][3993] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.940 [WARNING][3993] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.940 [INFO][3993] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.942 [INFO][3993] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.957701 containerd[1620]: 2026-04-13 19:28:12.950 [INFO][3934] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:12.960776 containerd[1620]: time="2026-04-13T19:28:12.960600677Z" level=info msg="TearDown network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" successfully" Apr 13 19:28:12.960776 containerd[1620]: time="2026-04-13T19:28:12.960700568Z" level=info msg="StopPodSandbox for \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" returns successfully" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.603 [INFO][3884] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.604 [INFO][3884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" iface="eth0" netns="/var/run/netns/cni-e73b46ea-d725-2c72-fe2b-87aa921c7d3a" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.604 [INFO][3884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" iface="eth0" netns="/var/run/netns/cni-e73b46ea-d725-2c72-fe2b-87aa921c7d3a" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.605 [INFO][3884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" iface="eth0" netns="/var/run/netns/cni-e73b46ea-d725-2c72-fe2b-87aa921c7d3a" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.605 [INFO][3884] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.605 [INFO][3884] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.850 [INFO][3971] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.850 [INFO][3971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.942 [INFO][3971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.966 [WARNING][3971] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.966 [INFO][3971] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.968 [INFO][3971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:12.988559 containerd[1620]: 2026-04-13 19:28:12.980 [INFO][3884] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:12.988969 containerd[1620]: time="2026-04-13T19:28:12.988728057Z" level=info msg="TearDown network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" successfully" Apr 13 19:28:12.988969 containerd[1620]: time="2026-04-13T19:28:12.988754740Z" level=info msg="StopPodSandbox for \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" returns successfully" Apr 13 19:28:12.990083 containerd[1620]: time="2026-04-13T19:28:12.989717040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-s4bvw,Uid:5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.651 [INFO][3917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.655 [INFO][3917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" iface="eth0" netns="/var/run/netns/cni-09ecb12e-83cd-b149-85f3-6d6a1cf0aeab" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.658 [INFO][3917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" iface="eth0" netns="/var/run/netns/cni-09ecb12e-83cd-b149-85f3-6d6a1cf0aeab" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.660 [INFO][3917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" iface="eth0" netns="/var/run/netns/cni-09ecb12e-83cd-b149-85f3-6d6a1cf0aeab" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.660 [INFO][3917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.660 [INFO][3917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.866 [INFO][4004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.867 [INFO][4004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.969 [INFO][4004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.997 [WARNING][4004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.997 [INFO][4004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:12.999 [INFO][4004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:13.051057 containerd[1620]: 2026-04-13 19:28:13.029 [INFO][3917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:13.051057 containerd[1620]: time="2026-04-13T19:28:13.050289363Z" level=info msg="TearDown network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" successfully" Apr 13 19:28:13.051057 containerd[1620]: time="2026-04-13T19:28:13.050318966Z" level=info msg="StopPodSandbox for \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" returns successfully" Apr 13 19:28:13.054022 containerd[1620]: time="2026-04-13T19:28:13.053335151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdt82,Uid:cb288a13-6f3a-4bf1-9058-ef9f893933ae,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:13.053463 systemd[1]: run-netns-cni\x2de73b46ea\x2dd725\x2d2c72\x2dfe2b\x2d87aa921c7d3a.mount: Deactivated successfully. Apr 13 19:28:13.053597 systemd[1]: run-netns-cni\x2d9188b4a0\x2df72a\x2d7bea\x2d696b\x2d5fb5c0c9169f.mount: Deactivated successfully. 
Apr 13 19:28:13.053676 systemd[1]: run-netns-cni\x2d3f0a59a4\x2da3d0\x2d6a3e\x2d5dd0\x2d71c3ffff2611.mount: Deactivated successfully. Apr 13 19:28:13.053748 systemd[1]: run-netns-cni\x2df36cbbe5\x2da43e\x2d4b19\x2d2950\x2d2488ab9ed63b.mount: Deactivated successfully. Apr 13 19:28:13.053816 systemd[1]: run-netns-cni\x2da8294833\x2dfd66\x2d7868\x2d7945\x2d0a1325ea24ae.mount: Deactivated successfully. Apr 13 19:28:13.053884 systemd[1]: run-netns-cni\x2dc87c58ee\x2dea22\x2db9e6\x2d6f52\x2d3b877acbc55b.mount: Deactivated successfully. Apr 13 19:28:13.067201 systemd[1]: run-netns-cni\x2d09ecb12e\x2d83cd\x2db149\x2d85f3\x2d6d6a1cf0aeab.mount: Deactivated successfully. Apr 13 19:28:13.074047 kubelet[2698]: I0413 19:28:13.072149 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-nginx-config\") pod \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " Apr 13 19:28:13.074047 kubelet[2698]: I0413 19:28:13.072215 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-ca-bundle\") pod \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " Apr 13 19:28:13.074047 kubelet[2698]: I0413 19:28:13.072235 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqtrh\" (UniqueName: \"kubernetes.io/projected/060246f3-68f3-47a5-b10e-f5d0dbb7a159-kube-api-access-xqtrh\") pod \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " Apr 13 19:28:13.074047 kubelet[2698]: I0413 19:28:13.072277 2698 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-backend-key-pair\") pod \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\" (UID: \"060246f3-68f3-47a5-b10e-f5d0dbb7a159\") " Apr 13 19:28:13.074047 kubelet[2698]: I0413 19:28:13.073546 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "060246f3-68f3-47a5-b10e-f5d0dbb7a159" (UID: "060246f3-68f3-47a5-b10e-f5d0dbb7a159"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:28:13.074339 kubelet[2698]: I0413 19:28:13.073859 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "060246f3-68f3-47a5-b10e-f5d0dbb7a159" (UID: "060246f3-68f3-47a5-b10e-f5d0dbb7a159"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:28:13.094584 systemd[1]: var-lib-kubelet-pods-060246f3\x2d68f3\x2d47a5\x2db10e\x2df5d0dbb7a159-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxqtrh.mount: Deactivated successfully. Apr 13 19:28:13.098851 kubelet[2698]: I0413 19:28:13.098732 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060246f3-68f3-47a5-b10e-f5d0dbb7a159-kube-api-access-xqtrh" (OuterVolumeSpecName: "kube-api-access-xqtrh") pod "060246f3-68f3-47a5-b10e-f5d0dbb7a159" (UID: "060246f3-68f3-47a5-b10e-f5d0dbb7a159"). InnerVolumeSpecName "kube-api-access-xqtrh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:28:13.104344 systemd[1]: var-lib-kubelet-pods-060246f3\x2d68f3\x2d47a5\x2db10e\x2df5d0dbb7a159-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 19:28:13.138907 kubelet[2698]: I0413 19:28:13.138853 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "060246f3-68f3-47a5-b10e-f5d0dbb7a159" (UID: "060246f3-68f3-47a5-b10e-f5d0dbb7a159"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:28:13.174563 kubelet[2698]: I0413 19:28:13.174510 2698 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-nginx-config\") on node \"ci-4081-3-7-8-01d4258341\" DevicePath \"\"" Apr 13 19:28:13.174563 kubelet[2698]: I0413 19:28:13.174550 2698 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-ca-bundle\") on node \"ci-4081-3-7-8-01d4258341\" DevicePath \"\"" Apr 13 19:28:13.174563 kubelet[2698]: I0413 19:28:13.174563 2698 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqtrh\" (UniqueName: \"kubernetes.io/projected/060246f3-68f3-47a5-b10e-f5d0dbb7a159-kube-api-access-xqtrh\") on node \"ci-4081-3-7-8-01d4258341\" DevicePath \"\"" Apr 13 19:28:13.174563 kubelet[2698]: I0413 19:28:13.174578 2698 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/060246f3-68f3-47a5-b10e-f5d0dbb7a159-whisker-backend-key-pair\") on node \"ci-4081-3-7-8-01d4258341\" DevicePath \"\"" Apr 13 19:28:13.427722 systemd-networkd[1251]: cali6e423b9195c: Link UP Apr 13 19:28:13.430604 systemd-networkd[1251]: cali6e423b9195c: Gained carrier Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:12.920 [ERROR][4033] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no 
such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:12.949 [INFO][4033] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0 coredns-674b8bbfcf- kube-system 781d0594-7e8f-4f56-9e23-737cbe5d8293 877 0 2026-04-13 19:27:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 coredns-674b8bbfcf-ft6d4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e423b9195c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:12.952 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.061 [INFO][4065] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" HandleID="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.131 [INFO][4065] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" 
HandleID="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307a10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"coredns-674b8bbfcf-ft6d4", "timestamp":"2026-04-13 19:28:13.061157662 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003431e0)} Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.131 [INFO][4065] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.135 [INFO][4065] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.135 [INFO][4065] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.139 [INFO][4065] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.162 [INFO][4065] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.177 [INFO][4065] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.182 [INFO][4065] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.190 [INFO][4065] ipam/ipam.go 
237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.190 [INFO][4065] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.204 [INFO][4065] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863 Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.217 [INFO][4065] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.253 [INFO][4065] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.1/26] block=192.168.20.0/26 handle="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.254 [INFO][4065] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.1/26] handle="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.257 [INFO][4065] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:13.536964 containerd[1620]: 2026-04-13 19:28:13.258 [INFO][4065] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.1/26] IPv6=[] ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" HandleID="k8s-pod-network.2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.537890 containerd[1620]: 2026-04-13 19:28:13.300 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"781d0594-7e8f-4f56-9e23-737cbe5d8293", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"coredns-674b8bbfcf-ft6d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6e423b9195c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:13.537890 containerd[1620]: 2026-04-13 19:28:13.318 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.1/32] ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.537890 containerd[1620]: 2026-04-13 19:28:13.318 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e423b9195c ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.537890 containerd[1620]: 2026-04-13 19:28:13.456 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.537890 containerd[1620]: 2026-04-13 19:28:13.472 [INFO][4033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" 
WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"781d0594-7e8f-4f56-9e23-737cbe5d8293", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863", Pod:"coredns-674b8bbfcf-ft6d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e423b9195c", MAC:"ca:9c:dc:e5:95:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:13.537890 
containerd[1620]: 2026-04-13 19:28:13.493 [INFO][4033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft6d4" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:13.686026 kubelet[2698]: I0413 19:28:13.685522 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/083a7fe5-365a-46b9-ad91-6da2ea0f91ea-whisker-ca-bundle\") pod \"whisker-676675864c-cv674\" (UID: \"083a7fe5-365a-46b9-ad91-6da2ea0f91ea\") " pod="calico-system/whisker-676675864c-cv674" Apr 13 19:28:13.686026 kubelet[2698]: I0413 19:28:13.685585 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8srs\" (UniqueName: \"kubernetes.io/projected/083a7fe5-365a-46b9-ad91-6da2ea0f91ea-kube-api-access-k8srs\") pod \"whisker-676675864c-cv674\" (UID: \"083a7fe5-365a-46b9-ad91-6da2ea0f91ea\") " pod="calico-system/whisker-676675864c-cv674" Apr 13 19:28:13.686026 kubelet[2698]: I0413 19:28:13.685605 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/083a7fe5-365a-46b9-ad91-6da2ea0f91ea-nginx-config\") pod \"whisker-676675864c-cv674\" (UID: \"083a7fe5-365a-46b9-ad91-6da2ea0f91ea\") " pod="calico-system/whisker-676675864c-cv674" Apr 13 19:28:13.686026 kubelet[2698]: I0413 19:28:13.685621 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/083a7fe5-365a-46b9-ad91-6da2ea0f91ea-whisker-backend-key-pair\") pod \"whisker-676675864c-cv674\" (UID: \"083a7fe5-365a-46b9-ad91-6da2ea0f91ea\") " pod="calico-system/whisker-676675864c-cv674" Apr 13 19:28:13.890405 
containerd[1620]: time="2026-04-13T19:28:13.890216213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676675864c-cv674,Uid:083a7fe5-365a-46b9-ad91-6da2ea0f91ea,Namespace:calico-system,Attempt:0,}" Apr 13 19:28:13.903950 containerd[1620]: time="2026-04-13T19:28:13.902870533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:13.903950 containerd[1620]: time="2026-04-13T19:28:13.902964302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:13.903950 containerd[1620]: time="2026-04-13T19:28:13.902981304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:13.903950 containerd[1620]: time="2026-04-13T19:28:13.903109357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:13.981403 systemd-networkd[1251]: cali4b28647f000: Link UP Apr 13 19:28:13.990745 systemd-networkd[1251]: cali4b28647f000: Gained carrier Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.113 [ERROR][4103] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.154 [INFO][4103] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0 calico-apiserver-7885d7c4d8- calico-system 5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9 873 0 2026-04-13 19:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7885d7c4d8 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 calico-apiserver-7885d7c4d8-s4bvw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4b28647f000 [] [] }} ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.154 [INFO][4103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.702 [INFO][4165] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" HandleID="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.772 [INFO][4165] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" HandleID="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003628c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"calico-apiserver-7885d7c4d8-s4bvw", "timestamp":"2026-04-13 19:28:13.702300772 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003ebce0)} Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.773 [INFO][4165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.773 [INFO][4165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.773 [INFO][4165] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.796 [INFO][4165] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.821 [INFO][4165] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.852 [INFO][4165] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.863 [INFO][4165] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.874 [INFO][4165] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.874 [INFO][4165] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.879 [INFO][4165] ipam/ipam.go 1806: 
Creating new handle: k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.893 [INFO][4165] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.912 [INFO][4165] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.2/26] block=192.168.20.0/26 handle="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.912 [INFO][4165] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.2/26] handle="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.912 [INFO][4165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:14.042996 containerd[1620]: 2026-04-13 19:28:13.912 [INFO][4165] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.2/26] IPv6=[] ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" HandleID="k8s-pod-network.cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:13.938 [INFO][4103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"calico-apiserver-7885d7c4d8-s4bvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4b28647f000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:13.939 [INFO][4103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.2/32] ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:13.939 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b28647f000 ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:13.997 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:14.001 [INFO][4103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e", Pod:"calico-apiserver-7885d7c4d8-s4bvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4b28647f000", MAC:"42:86:89:37:17:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.044017 containerd[1620]: 2026-04-13 19:28:14.022 [INFO][4103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-s4bvw" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:14.126108 kubelet[2698]: I0413 19:28:14.126059 2698 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="060246f3-68f3-47a5-b10e-f5d0dbb7a159" path="/var/lib/kubelet/pods/060246f3-68f3-47a5-b10e-f5d0dbb7a159/volumes" Apr 13 19:28:14.175880 systemd-networkd[1251]: calidea0326cbf5: Link UP Apr 13 19:28:14.184687 systemd-networkd[1251]: calidea0326cbf5: Gained carrier Apr 13 19:28:14.205795 containerd[1620]: time="2026-04-13T19:28:14.205240417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:14.205795 containerd[1620]: time="2026-04-13T19:28:14.205372630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:14.205795 containerd[1620]: time="2026-04-13T19:28:14.205388712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.205795 containerd[1620]: time="2026-04-13T19:28:14.205550168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.064 [ERROR][4071] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.167 [INFO][4071] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0 calico-apiserver-7885d7c4d8- calico-system 5241b13b-999d-4851-9b4e-47561517f354 872 0 2026-04-13 19:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7885d7c4d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 calico-apiserver-7885d7c4d8-kjddq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidea0326cbf5 [] [] }} ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.170 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.787 [INFO][4179] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" 
HandleID="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.839 [INFO][4179] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" HandleID="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035f6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"calico-apiserver-7885d7c4d8-kjddq", "timestamp":"2026-04-13 19:28:13.777185904 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000296dc0)} Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.846 [INFO][4179] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.914 [INFO][4179] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.917 [INFO][4179] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.928 [INFO][4179] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.951 [INFO][4179] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:13.992 [INFO][4179] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.011 [INFO][4179] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.028 [INFO][4179] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.028 [INFO][4179] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.045 [INFO][4179] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0 Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.071 [INFO][4179] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.094 [INFO][4179] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.20.3/26] block=192.168.20.0/26 handle="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.094 [INFO][4179] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.3/26] handle="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.094 [INFO][4179] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:14.238028 containerd[1620]: 2026-04-13 19:28:14.094 [INFO][4179] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.3/26] IPv6=[] ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" HandleID="k8s-pod-network.ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.134 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5241b13b-999d-4851-9b4e-47561517f354", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"calico-apiserver-7885d7c4d8-kjddq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidea0326cbf5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.135 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.3/32] ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.135 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidea0326cbf5 ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.196 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" 
WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.200 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5241b13b-999d-4851-9b4e-47561517f354", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0", Pod:"calico-apiserver-7885d7c4d8-kjddq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidea0326cbf5", MAC:"be:83:4d:67:50:da", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.242214 containerd[1620]: 2026-04-13 19:28:14.224 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0" Namespace="calico-system" Pod="calico-apiserver-7885d7c4d8-kjddq" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:14.263059 kernel: calico-node[4355]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:28:14.290513 systemd-networkd[1251]: caliab859c802f1: Link UP Apr 13 19:28:14.295092 systemd-networkd[1251]: caliab859c802f1: Gained carrier Apr 13 19:28:14.340863 containerd[1620]: time="2026-04-13T19:28:14.340121947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft6d4,Uid:781d0594-7e8f-4f56-9e23-737cbe5d8293,Namespace:kube-system,Attempt:1,} returns sandbox id \"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863\"" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.260 [ERROR][4043] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.359 [INFO][4043] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0 calico-kube-controllers-669bfd5bbc- calico-system 8dfd1508-5560-4545-9951-14512e76963d 875 0 2026-04-13 19:27:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669bfd5bbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} 
{k8s ci-4081-3-7-8-01d4258341 calico-kube-controllers-669bfd5bbc-9897g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab859c802f1 [] [] }} ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.359 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.829 [INFO][4231] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" HandleID="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.907 [INFO][4231] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" HandleID="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"calico-kube-controllers-669bfd5bbc-9897g", "timestamp":"2026-04-13 19:28:13.829490873 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e4160)} Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:13.907 [INFO][4231] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.097 [INFO][4231] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.097 [INFO][4231] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.118 [INFO][4231] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.152 [INFO][4231] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.181 [INFO][4231] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.191 [INFO][4231] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.202 [INFO][4231] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.202 [INFO][4231] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.212 [INFO][4231] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1 Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.239 [INFO][4231] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.249 [INFO][4231] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.4/26] block=192.168.20.0/26 handle="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.250 [INFO][4231] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.4/26] handle="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.250 [INFO][4231] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:14.385375 containerd[1620]: 2026-04-13 19:28:14.250 [INFO][4231] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.4/26] IPv6=[] ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" HandleID="k8s-pod-network.64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.273 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0", GenerateName:"calico-kube-controllers-669bfd5bbc-", Namespace:"calico-system", SelfLink:"", UID:"8dfd1508-5560-4545-9951-14512e76963d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669bfd5bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"calico-kube-controllers-669bfd5bbc-9897g", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab859c802f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.275 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.4/32] ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.277 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab859c802f1 ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.323 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.326 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" Pod="calico-kube-controllers-669bfd5bbc-9897g" 
WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0", GenerateName:"calico-kube-controllers-669bfd5bbc-", Namespace:"calico-system", SelfLink:"", UID:"8dfd1508-5560-4545-9951-14512e76963d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669bfd5bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1", Pod:"calico-kube-controllers-669bfd5bbc-9897g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab859c802f1", MAC:"8e:b2:2b:94:1a:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.388456 containerd[1620]: 2026-04-13 19:28:14.357 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1" Namespace="calico-system" 
Pod="calico-kube-controllers-669bfd5bbc-9897g" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:14.424822 containerd[1620]: time="2026-04-13T19:28:14.424592379Z" level=info msg="CreateContainer within sandbox \"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:28:14.462714 containerd[1620]: time="2026-04-13T19:28:14.455432960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:14.462714 containerd[1620]: time="2026-04-13T19:28:14.455521688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:14.462714 containerd[1620]: time="2026-04-13T19:28:14.455568053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.462714 containerd[1620]: time="2026-04-13T19:28:14.456502104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.530406 containerd[1620]: time="2026-04-13T19:28:14.525125345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-s4bvw,Uid:5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e\"" Apr 13 19:28:14.538905 systemd-networkd[1251]: cali5aa06af23e8: Link UP Apr 13 19:28:14.539688 systemd-networkd[1251]: cali5aa06af23e8: Gained carrier Apr 13 19:28:14.559217 containerd[1620]: time="2026-04-13T19:28:14.559171679Z" level=info msg="CreateContainer within sandbox \"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c357712fc42e632cfa1dd1e12e38d87415d7441e71b5c1ba433f0bb69271d97\"" Apr 13 19:28:14.560156 containerd[1620]: time="2026-04-13T19:28:14.559965957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:28:14.561666 containerd[1620]: time="2026-04-13T19:28:14.560428802Z" level=info msg="StartContainer for \"0c357712fc42e632cfa1dd1e12e38d87415d7441e71b5c1ba433f0bb69271d97\"" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.268 [ERROR][4048] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.350 [INFO][4048] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0 goldmane-5b85766d88- calico-system 8159562d-f09d-4214-b84d-626c5db8278b 874 0 2026-04-13 19:27:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 goldmane-5b85766d88-7vqgk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5aa06af23e8 [] [] }} ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.350 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.828 [INFO][4232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" HandleID="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.915 [INFO][4232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" HandleID="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000109490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"goldmane-5b85766d88-7vqgk", "timestamp":"2026-04-13 19:28:13.828138816 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038c580)} Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:13.915 [INFO][4232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.255 [INFO][4232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.255 [INFO][4232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.272 [INFO][4232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.296 [INFO][4232] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.331 [INFO][4232] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.339 [INFO][4232] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.364 [INFO][4232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.364 [INFO][4232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.373 [INFO][4232] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7 Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.418 [INFO][4232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.441 [INFO][4232] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.5/26] block=192.168.20.0/26 handle="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.441 [INFO][4232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.5/26] handle="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.442 [INFO][4232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:14.610192 containerd[1620]: 2026-04-13 19:28:14.443 [INFO][4232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.5/26] IPv6=[] ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" HandleID="k8s-pod-network.a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.487 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8159562d-f09d-4214-b84d-626c5db8278b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"goldmane-5b85766d88-7vqgk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali5aa06af23e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.488 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.5/32] ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.488 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5aa06af23e8 ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.538 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.546 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", 
UID:"8159562d-f09d-4214-b84d-626c5db8278b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7", Pod:"goldmane-5b85766d88-7vqgk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5aa06af23e8", MAC:"4e:76:c2:78:5d:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.611204 containerd[1620]: 2026-04-13 19:28:14.584 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7" Namespace="calico-system" Pod="goldmane-5b85766d88-7vqgk" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:14.684026 containerd[1620]: time="2026-04-13T19:28:14.683083894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:14.687725 containerd[1620]: time="2026-04-13T19:28:14.685444285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:14.689040 containerd[1620]: time="2026-04-13T19:28:14.687696466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.697809 containerd[1620]: time="2026-04-13T19:28:14.694001523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.708117 systemd-networkd[1251]: calibef3683f27e: Link UP Apr 13 19:28:14.713187 systemd-networkd[1251]: calibef3683f27e: Gained carrier Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:13.384 [ERROR][4079] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:13.612 [INFO][4079] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0 coredns-674b8bbfcf- kube-system dd51994b-5cb8-4cf8-81b2-1ad58037a2fe 879 0 2026-04-13 19:27:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 coredns-674b8bbfcf-7z59s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibef3683f27e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:13.614 [INFO][4079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:13.983 [INFO][4260] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" HandleID="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.079 [INFO][4260] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" HandleID="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ce2e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"coredns-674b8bbfcf-7z59s", "timestamp":"2026-04-13 19:28:13.983079883 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000386000)} Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.084 [INFO][4260] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.442 [INFO][4260] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.442 [INFO][4260] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.471 [INFO][4260] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.523 [INFO][4260] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.568 [INFO][4260] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.584 [INFO][4260] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.612 [INFO][4260] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.612 [INFO][4260] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.618 [INFO][4260] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.627 [INFO][4260] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.639 [INFO][4260] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.20.6/26] block=192.168.20.0/26 handle="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.639 [INFO][4260] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.6/26] handle="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.639 [INFO][4260] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:14.800273 containerd[1620]: 2026-04-13 19:28:14.640 [INFO][4260] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.6/26] IPv6=[] ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" HandleID="k8s-pod-network.f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.672 [INFO][4079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"coredns-674b8bbfcf-7z59s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef3683f27e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.673 [INFO][4079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.6/32] ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.673 [INFO][4079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibef3683f27e ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.712 [INFO][4079] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.763 [INFO][4079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce", Pod:"coredns-674b8bbfcf-7z59s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef3683f27e", 
MAC:"52:56:dc:1a:9e:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.800849 containerd[1620]: 2026-04-13 19:28:14.788 [INFO][4079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z59s" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:14.805074 systemd-networkd[1251]: cali6e423b9195c: Gained IPv6LL Apr 13 19:28:14.856312 containerd[1620]: time="2026-04-13T19:28:14.856034832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7885d7c4d8-kjddq,Uid:5241b13b-999d-4851-9b4e-47561517f354,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0\"" Apr 13 19:28:14.867174 systemd-networkd[1251]: califd3e778afe9: Link UP Apr 13 19:28:14.877695 systemd-networkd[1251]: califd3e778afe9: Gained carrier Apr 13 19:28:14.940525 containerd[1620]: time="2026-04-13T19:28:14.939301147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:14.940525 containerd[1620]: time="2026-04-13T19:28:14.939392156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:14.940525 containerd[1620]: time="2026-04-13T19:28:14.939407917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.940525 containerd[1620]: time="2026-04-13T19:28:14.939500966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:13.433 [ERROR][4167] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:13.654 [INFO][4167] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0 csi-node-driver- calico-system cb288a13-6f3a-4bf1-9058-ef9f893933ae 878 0 2026-04-13 19:27:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 csi-node-driver-xdt82 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califd3e778afe9 [] [] }} ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:13.658 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" 
Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.087 [INFO][4272] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" HandleID="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.147 [INFO][4272] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" HandleID="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"csi-node-driver-xdt82", "timestamp":"2026-04-13 19:28:14.087709427 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e89a0)} Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.153 [INFO][4272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.640 [INFO][4272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.640 [INFO][4272] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.643 [INFO][4272] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.668 [INFO][4272] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.725 [INFO][4272] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.742 [INFO][4272] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.754 [INFO][4272] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.754 [INFO][4272] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.759 [INFO][4272] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4 Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.768 [INFO][4272] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.785 [INFO][4272] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.20.7/26] block=192.168.20.0/26 handle="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.785 [INFO][4272] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.7/26] handle="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.785 [INFO][4272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:14.940815 containerd[1620]: 2026-04-13 19:28:14.785 [INFO][4272] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.7/26] IPv6=[] ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" HandleID="k8s-pod-network.ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 19:28:14.809 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb288a13-6f3a-4bf1-9058-ef9f893933ae", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"csi-node-driver-xdt82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd3e778afe9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 19:28:14.809 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.7/32] ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 19:28:14.809 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd3e778afe9 ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 19:28:14.886 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 
19:28:14.888 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb288a13-6f3a-4bf1-9058-ef9f893933ae", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4", Pod:"csi-node-driver-xdt82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd3e778afe9", MAC:"f6:51:19:06:5f:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:14.941684 containerd[1620]: 2026-04-13 19:28:14.913 
[INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4" Namespace="calico-system" Pod="csi-node-driver-xdt82" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:14.984103 containerd[1620]: time="2026-04-13T19:28:14.983450550Z" level=info msg="StartContainer for \"0c357712fc42e632cfa1dd1e12e38d87415d7441e71b5c1ba433f0bb69271d97\" returns successfully" Apr 13 19:28:14.989840 containerd[1620]: time="2026-04-13T19:28:14.989413494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:14.989840 containerd[1620]: time="2026-04-13T19:28:14.989790491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:14.990535 containerd[1620]: time="2026-04-13T19:28:14.990145446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:14.993751 containerd[1620]: time="2026-04-13T19:28:14.993458210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:15.019161 systemd-networkd[1251]: cali95fa93601a8: Link UP Apr 13 19:28:15.020135 systemd-networkd[1251]: cali95fa93601a8: Gained carrier Apr 13 19:28:15.063962 systemd-journald[1159]: Under memory pressure, flushing caches. Apr 13 19:28:15.065041 systemd-resolved[1489]: Under memory pressure, flushing caches. Apr 13 19:28:15.065100 systemd-resolved[1489]: Flushed all caches. Apr 13 19:28:15.097862 containerd[1620]: time="2026-04-13T19:28:15.097274130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:15.097862 containerd[1620]: time="2026-04-13T19:28:15.097444506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:15.097862 containerd[1620]: time="2026-04-13T19:28:15.097461268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:15.097862 containerd[1620]: time="2026-04-13T19:28:15.097679449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:15.125459 systemd-networkd[1251]: vxlan.calico: Link UP Apr 13 19:28:15.125473 systemd-networkd[1251]: vxlan.calico: Gained carrier Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.443 [INFO][4317] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0 whisker-676675864c- calico-system 083a7fe5-365a-46b9-ad91-6da2ea0f91ea 900 0 2026-04-13 19:28:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:676675864c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-8-01d4258341 whisker-676675864c-cv674 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali95fa93601a8 [] [] }} ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.445 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" 
Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.627 [INFO][4470] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" HandleID="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.724 [INFO][4470] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" HandleID="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037e670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-8-01d4258341", "pod":"whisker-676675864c-cv674", "timestamp":"2026-04-13 19:28:14.627258147 +0000 UTC"}, Hostname:"ci-4081-3-7-8-01d4258341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000ce6e0)} Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.724 [INFO][4470] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.786 [INFO][4470] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.787 [INFO][4470] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-8-01d4258341' Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.798 [INFO][4470] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.814 [INFO][4470] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.851 [INFO][4470] ipam/ipam.go 526: Trying affinity for 192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.863 [INFO][4470] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.882 [INFO][4470] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.882 [INFO][4470] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.888 [INFO][4470] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821 Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.916 [INFO][4470] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.939 [INFO][4470] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.20.8/26] block=192.168.20.0/26 handle="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.942 [INFO][4470] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.8/26] handle="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" host="ci-4081-3-7-8-01d4258341" Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.946 [INFO][4470] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:15.136189 containerd[1620]: 2026-04-13 19:28:14.947 [INFO][4470] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.8/26] IPv6=[] ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" HandleID="k8s-pod-network.ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:14.984 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0", GenerateName:"whisker-676675864c-", Namespace:"calico-system", SelfLink:"", UID:"083a7fe5-365a-46b9-ad91-6da2ea0f91ea", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"676675864c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"", Pod:"whisker-676675864c-cv674", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali95fa93601a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:14.987 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.8/32] ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:14.987 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95fa93601a8 ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:15.044 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:15.084 [INFO][4317] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0", GenerateName:"whisker-676675864c-", Namespace:"calico-system", SelfLink:"", UID:"083a7fe5-365a-46b9-ad91-6da2ea0f91ea", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"676675864c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821", Pod:"whisker-676675864c-cv674", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali95fa93601a8", MAC:"26:4f:79:4c:d0:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:15.136795 containerd[1620]: 2026-04-13 19:28:15.110 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821" Namespace="calico-system" Pod="whisker-676675864c-cv674" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--676675864c--cv674-eth0" Apr 13 19:28:15.229380 containerd[1620]: time="2026-04-13T19:28:15.228261328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669bfd5bbc-9897g,Uid:8dfd1508-5560-4545-9951-14512e76963d,Namespace:calico-system,Attempt:1,} returns sandbox id \"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1\"" Apr 13 19:28:15.252086 systemd-networkd[1251]: cali4b28647f000: Gained IPv6LL Apr 13 19:28:15.365129 containerd[1620]: time="2026-04-13T19:28:15.353725440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:15.365129 containerd[1620]: time="2026-04-13T19:28:15.353784126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:15.365129 containerd[1620]: time="2026-04-13T19:28:15.353799607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:15.365129 containerd[1620]: time="2026-04-13T19:28:15.355336793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:15.380738 containerd[1620]: time="2026-04-13T19:28:15.368184053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7vqgk,Uid:8159562d-f09d-4214-b84d-626c5db8278b,Namespace:calico-system,Attempt:1,} returns sandbox id \"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7\"" Apr 13 19:28:15.405435 containerd[1620]: time="2026-04-13T19:28:15.405349782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z59s,Uid:dd51994b-5cb8-4cf8-81b2-1ad58037a2fe,Namespace:kube-system,Attempt:1,} returns sandbox id \"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce\"" Apr 13 19:28:15.408190 containerd[1620]: time="2026-04-13T19:28:15.408153088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdt82,Uid:cb288a13-6f3a-4bf1-9058-ef9f893933ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4\"" Apr 13 19:28:15.418275 containerd[1620]: time="2026-04-13T19:28:15.418216964Z" level=info msg="CreateContainer within sandbox \"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:28:15.455136 containerd[1620]: time="2026-04-13T19:28:15.454789237Z" level=info msg="CreateContainer within sandbox \"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69711a1f960a9a12e5a5fc48c406b31eb09c6c1b46b89b4039ba91b1c69e48d9\"" Apr 13 19:28:15.462208 containerd[1620]: time="2026-04-13T19:28:15.462148455Z" level=info msg="StartContainer for \"69711a1f960a9a12e5a5fc48c406b31eb09c6c1b46b89b4039ba91b1c69e48d9\"" Apr 13 19:28:15.474010 kubelet[2698]: I0413 19:28:15.473755 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ft6d4" 
podStartSLOduration=32.473735796 podStartE2EDuration="32.473735796s" podCreationTimestamp="2026-04-13 19:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:28:15.47199479 +0000 UTC m=+37.517321760" watchObservedRunningTime="2026-04-13 19:28:15.473735796 +0000 UTC m=+37.519062726" Apr 13 19:28:15.506479 systemd-networkd[1251]: calidea0326cbf5: Gained IPv6LL Apr 13 19:28:15.562116 containerd[1620]: time="2026-04-13T19:28:15.561996576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676675864c-cv674,Uid:083a7fe5-365a-46b9-ad91-6da2ea0f91ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821\"" Apr 13 19:28:15.604570 containerd[1620]: time="2026-04-13T19:28:15.604223225Z" level=info msg="StartContainer for \"69711a1f960a9a12e5a5fc48c406b31eb09c6c1b46b89b4039ba91b1c69e48d9\" returns successfully" Apr 13 19:28:15.826180 systemd-networkd[1251]: cali5aa06af23e8: Gained IPv6LL Apr 13 19:28:16.082325 systemd-networkd[1251]: calibef3683f27e: Gained IPv6LL Apr 13 19:28:16.210533 systemd-networkd[1251]: caliab859c802f1: Gained IPv6LL Apr 13 19:28:16.274986 systemd-networkd[1251]: cali95fa93601a8: Gained IPv6LL Apr 13 19:28:16.466116 systemd-networkd[1251]: califd3e778afe9: Gained IPv6LL Apr 13 19:28:16.557877 kubelet[2698]: I0413 19:28:16.557390 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7z59s" podStartSLOduration=33.557365811 podStartE2EDuration="33.557365811s" podCreationTimestamp="2026-04-13 19:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:28:16.552857395 +0000 UTC m=+38.598184325" watchObservedRunningTime="2026-04-13 19:28:16.557365811 +0000 UTC m=+38.602692741" Apr 13 19:28:17.106361 systemd-networkd[1251]: 
vxlan.calico: Gained IPv6LL Apr 13 19:28:17.609717 containerd[1620]: time="2026-04-13T19:28:17.609667749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:17.611917 containerd[1620]: time="2026-04-13T19:28:17.611883148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:28:17.613275 containerd[1620]: time="2026-04-13T19:28:17.613208706Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:17.616308 containerd[1620]: time="2026-04-13T19:28:17.616214855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:17.619417 containerd[1620]: time="2026-04-13T19:28:17.619377138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.059378739s" Apr 13 19:28:17.619636 containerd[1620]: time="2026-04-13T19:28:17.619521311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:28:17.623922 containerd[1620]: time="2026-04-13T19:28:17.621189061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:28:17.627104 containerd[1620]: time="2026-04-13T19:28:17.626988100Z" level=info msg="CreateContainer within sandbox 
\"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:28:17.650776 containerd[1620]: time="2026-04-13T19:28:17.650729186Z" level=info msg="CreateContainer within sandbox \"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80f683a2642c75c6c9cbd598b4f340d329b976f69df98f0409aba909eda2dff2\"" Apr 13 19:28:17.653414 containerd[1620]: time="2026-04-13T19:28:17.653347300Z" level=info msg="StartContainer for \"80f683a2642c75c6c9cbd598b4f340d329b976f69df98f0409aba909eda2dff2\"" Apr 13 19:28:17.728501 containerd[1620]: time="2026-04-13T19:28:17.727336404Z" level=info msg="StartContainer for \"80f683a2642c75c6c9cbd598b4f340d329b976f69df98f0409aba909eda2dff2\" returns successfully" Apr 13 19:28:18.035120 containerd[1620]: time="2026-04-13T19:28:18.035065192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:18.036670 containerd[1620]: time="2026-04-13T19:28:18.036633609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 19:28:18.038943 containerd[1620]: time="2026-04-13T19:28:18.038897166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 417.671302ms" Apr 13 19:28:18.039025 containerd[1620]: time="2026-04-13T19:28:18.038977853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 
19:28:18.040316 containerd[1620]: time="2026-04-13T19:28:18.040289487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:28:18.044165 containerd[1620]: time="2026-04-13T19:28:18.044121341Z" level=info msg="CreateContainer within sandbox \"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:28:18.065891 containerd[1620]: time="2026-04-13T19:28:18.065836831Z" level=info msg="CreateContainer within sandbox \"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5657f5714c642be10ecc5a699b9d2cf7aadcf4e09f82bbabc6ae282365cd5b30\"" Apr 13 19:28:18.069943 containerd[1620]: time="2026-04-13T19:28:18.068109509Z" level=info msg="StartContainer for \"5657f5714c642be10ecc5a699b9d2cf7aadcf4e09f82bbabc6ae282365cd5b30\"" Apr 13 19:28:18.152254 containerd[1620]: time="2026-04-13T19:28:18.152198231Z" level=info msg="StartContainer for \"5657f5714c642be10ecc5a699b9d2cf7aadcf4e09f82bbabc6ae282365cd5b30\" returns successfully" Apr 13 19:28:18.562488 kubelet[2698]: I0413 19:28:18.562414 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7885d7c4d8-s4bvw" podStartSLOduration=19.487326965 podStartE2EDuration="22.562399229s" podCreationTimestamp="2026-04-13 19:27:56 +0000 UTC" firstStartedPulling="2026-04-13 19:28:14.545671077 +0000 UTC m=+36.590998007" lastFinishedPulling="2026-04-13 19:28:17.620743381 +0000 UTC m=+39.666070271" observedRunningTime="2026-04-13 19:28:18.562083642 +0000 UTC m=+40.607410532" watchObservedRunningTime="2026-04-13 19:28:18.562399229 +0000 UTC m=+40.607726119" Apr 13 19:28:18.591192 kubelet[2698]: I0413 19:28:18.591050 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7885d7c4d8-kjddq" podStartSLOduration=19.416106734 
podStartE2EDuration="22.591029642s" podCreationTimestamp="2026-04-13 19:27:56 +0000 UTC" firstStartedPulling="2026-04-13 19:28:14.864728723 +0000 UTC m=+36.910055653" lastFinishedPulling="2026-04-13 19:28:18.039651631 +0000 UTC m=+40.084978561" observedRunningTime="2026-04-13 19:28:18.590179248 +0000 UTC m=+40.635506178" watchObservedRunningTime="2026-04-13 19:28:18.591029642 +0000 UTC m=+40.636356692" Apr 13 19:28:19.555077 kubelet[2698]: I0413 19:28:19.554275 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:19.555077 kubelet[2698]: I0413 19:28:19.554330 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:21.238975 containerd[1620]: time="2026-04-13T19:28:21.238555421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:21.240010 containerd[1620]: time="2026-04-13T19:28:21.239757038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:28:21.241971 containerd[1620]: time="2026-04-13T19:28:21.241079545Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:21.243857 containerd[1620]: time="2026-04-13T19:28:21.243748840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:21.245358 containerd[1620]: time="2026-04-13T19:28:21.244267642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.203945192s" Apr 13 19:28:21.245358 containerd[1620]: time="2026-04-13T19:28:21.244306205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:28:21.247622 containerd[1620]: time="2026-04-13T19:28:21.246300005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:28:21.265152 containerd[1620]: time="2026-04-13T19:28:21.265107401Z" level=info msg="CreateContainer within sandbox \"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:28:21.283345 containerd[1620]: time="2026-04-13T19:28:21.283227381Z" level=info msg="CreateContainer within sandbox \"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"680aabcd56ed783064a4f781ce7a232a3a2e240942007d0727fe540ac8145f1b\"" Apr 13 19:28:21.285959 containerd[1620]: time="2026-04-13T19:28:21.284382434Z" level=info msg="StartContainer for \"680aabcd56ed783064a4f781ce7a232a3a2e240942007d0727fe540ac8145f1b\"" Apr 13 19:28:21.390995 containerd[1620]: time="2026-04-13T19:28:21.389468462Z" level=info msg="StartContainer for \"680aabcd56ed783064a4f781ce7a232a3a2e240942007d0727fe540ac8145f1b\" returns successfully" Apr 13 19:28:21.589781 kubelet[2698]: I0413 19:28:21.589528 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669bfd5bbc-9897g" podStartSLOduration=18.584717158 podStartE2EDuration="24.589448858s" podCreationTimestamp="2026-04-13 19:27:57 +0000 UTC" firstStartedPulling="2026-04-13 19:28:15.241163313 +0000 UTC m=+37.286490283" lastFinishedPulling="2026-04-13 
19:28:21.245895093 +0000 UTC m=+43.291221983" observedRunningTime="2026-04-13 19:28:21.588561066 +0000 UTC m=+43.633887996" watchObservedRunningTime="2026-04-13 19:28:21.589448858 +0000 UTC m=+43.634775788" Apr 13 19:28:23.590646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958618899.mount: Deactivated successfully. Apr 13 19:28:23.901360 containerd[1620]: time="2026-04-13T19:28:23.900171376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.902983 containerd[1620]: time="2026-04-13T19:28:23.902931468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:28:23.904227 containerd[1620]: time="2026-04-13T19:28:23.904171843Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.907569 containerd[1620]: time="2026-04-13T19:28:23.907540902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.908319 containerd[1620]: time="2026-04-13T19:28:23.908281639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.661946712s" Apr 13 19:28:23.908390 containerd[1620]: time="2026-04-13T19:28:23.908319362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 
13 19:28:23.910551 containerd[1620]: time="2026-04-13T19:28:23.910162384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:28:23.915219 containerd[1620]: time="2026-04-13T19:28:23.915179130Z" level=info msg="CreateContainer within sandbox \"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:28:23.933257 containerd[1620]: time="2026-04-13T19:28:23.933172914Z" level=info msg="CreateContainer within sandbox \"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1d5f86dad44db76b1b6ad1d02dc1401506cd67996499ff832bdd4cfd908b5bad\"" Apr 13 19:28:23.936437 containerd[1620]: time="2026-04-13T19:28:23.936331716Z" level=info msg="StartContainer for \"1d5f86dad44db76b1b6ad1d02dc1401506cd67996499ff832bdd4cfd908b5bad\"" Apr 13 19:28:24.010865 containerd[1620]: time="2026-04-13T19:28:24.010617493Z" level=info msg="StartContainer for \"1d5f86dad44db76b1b6ad1d02dc1401506cd67996499ff832bdd4cfd908b5bad\" returns successfully" Apr 13 19:28:25.541627 containerd[1620]: time="2026-04-13T19:28:25.539799299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:25.541627 containerd[1620]: time="2026-04-13T19:28:25.541483063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:28:25.542516 containerd[1620]: time="2026-04-13T19:28:25.542461735Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:25.546628 containerd[1620]: time="2026-04-13T19:28:25.546595520Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:25.547462 containerd[1620]: time="2026-04-13T19:28:25.547430622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.637212634s" Apr 13 19:28:25.547570 containerd[1620]: time="2026-04-13T19:28:25.547554791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:28:25.550740 containerd[1620]: time="2026-04-13T19:28:25.550489407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:28:25.554081 containerd[1620]: time="2026-04-13T19:28:25.554045629Z" level=info msg="CreateContainer within sandbox \"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:28:25.580020 containerd[1620]: time="2026-04-13T19:28:25.576966437Z" level=info msg="CreateContainer within sandbox \"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ffbe177f94213aa4ede57dc51afca5ce72c24601d8a814bdd5c7068c8f4e356b\"" Apr 13 19:28:25.580020 containerd[1620]: time="2026-04-13T19:28:25.578589837Z" level=info msg="StartContainer for \"ffbe177f94213aa4ede57dc51afca5ce72c24601d8a814bdd5c7068c8f4e356b\"" Apr 13 19:28:25.665034 containerd[1620]: time="2026-04-13T19:28:25.664817990Z" level=info msg="StartContainer for \"ffbe177f94213aa4ede57dc51afca5ce72c24601d8a814bdd5c7068c8f4e356b\" returns successfully" Apr 13 
19:28:27.274763 containerd[1620]: time="2026-04-13T19:28:27.274656223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:28:27.276950 containerd[1620]: time="2026-04-13T19:28:27.276448150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:27.278586 containerd[1620]: time="2026-04-13T19:28:27.278546298Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:27.279941 containerd[1620]: time="2026-04-13T19:28:27.279895474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:27.281037 containerd[1620]: time="2026-04-13T19:28:27.280994832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.730459582s" Apr 13 19:28:27.281819 containerd[1620]: time="2026-04-13T19:28:27.281746445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:28:27.284905 containerd[1620]: time="2026-04-13T19:28:27.283688823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:28:27.287592 containerd[1620]: time="2026-04-13T19:28:27.287546856Z" level=info msg="CreateContainer within sandbox 
\"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:28:27.312263 containerd[1620]: time="2026-04-13T19:28:27.312214043Z" level=info msg="CreateContainer within sandbox \"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"07676588fdce0f7322a37d68a336f86040967f0ec0543530b49aebbf598a4173\"" Apr 13 19:28:27.314272 containerd[1620]: time="2026-04-13T19:28:27.313398007Z" level=info msg="StartContainer for \"07676588fdce0f7322a37d68a336f86040967f0ec0543530b49aebbf598a4173\"" Apr 13 19:28:27.351801 systemd[1]: run-containerd-runc-k8s.io-07676588fdce0f7322a37d68a336f86040967f0ec0543530b49aebbf598a4173-runc.oUjqJc.mount: Deactivated successfully. Apr 13 19:28:27.393424 containerd[1620]: time="2026-04-13T19:28:27.393367911Z" level=info msg="StartContainer for \"07676588fdce0f7322a37d68a336f86040967f0ec0543530b49aebbf598a4173\" returns successfully" Apr 13 19:28:28.931099 containerd[1620]: time="2026-04-13T19:28:28.931012703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:28.932978 containerd[1620]: time="2026-04-13T19:28:28.932910635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:28:28.934342 containerd[1620]: time="2026-04-13T19:28:28.934295691Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:28.939207 containerd[1620]: time="2026-04-13T19:28:28.939109186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 13 19:28:28.942357 containerd[1620]: time="2026-04-13T19:28:28.942287967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.65853354s" Apr 13 19:28:28.942617 containerd[1620]: time="2026-04-13T19:28:28.942364252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:28:28.954304 containerd[1620]: time="2026-04-13T19:28:28.953896654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:28:28.960239 containerd[1620]: time="2026-04-13T19:28:28.960198172Z" level=info msg="CreateContainer within sandbox \"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:28:28.980171 containerd[1620]: time="2026-04-13T19:28:28.980033072Z" level=info msg="CreateContainer within sandbox \"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6619a443b6348a232a7e963fefac919f6bb0323df0823213e9fc889577b436ae\"" Apr 13 19:28:28.982297 containerd[1620]: time="2026-04-13T19:28:28.982262627Z" level=info msg="StartContainer for \"6619a443b6348a232a7e963fefac919f6bb0323df0823213e9fc889577b436ae\"" Apr 13 19:28:29.058756 containerd[1620]: time="2026-04-13T19:28:29.058532941Z" level=info msg="StartContainer for \"6619a443b6348a232a7e963fefac919f6bb0323df0823213e9fc889577b436ae\" returns successfully" Apr 13 19:28:29.211290 kubelet[2698]: I0413 
19:28:29.211158 2698 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 19:28:29.211290 kubelet[2698]: I0413 19:28:29.211199 2698 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:28:29.616302 kubelet[2698]: I0413 19:28:29.616070 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xdt82" podStartSLOduration=19.075933318 podStartE2EDuration="32.616054677s" podCreationTimestamp="2026-04-13 19:27:57 +0000 UTC" firstStartedPulling="2026-04-13 19:28:15.413473714 +0000 UTC m=+37.458800684" lastFinishedPulling="2026-04-13 19:28:28.953595153 +0000 UTC m=+50.998922043" observedRunningTime="2026-04-13 19:28:29.614526573 +0000 UTC m=+51.659853583" watchObservedRunningTime="2026-04-13 19:28:29.616054677 +0000 UTC m=+51.661381607" Apr 13 19:28:29.616302 kubelet[2698]: I0413 19:28:29.616233 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-7vqgk" podStartSLOduration=25.095775427 podStartE2EDuration="33.616227929s" podCreationTimestamp="2026-04-13 19:27:56 +0000 UTC" firstStartedPulling="2026-04-13 19:28:15.388855016 +0000 UTC m=+37.434181906" lastFinishedPulling="2026-04-13 19:28:23.909307438 +0000 UTC m=+45.954634408" observedRunningTime="2026-04-13 19:28:24.593966062 +0000 UTC m=+46.639292992" watchObservedRunningTime="2026-04-13 19:28:29.616227929 +0000 UTC m=+51.661554859" Apr 13 19:28:31.217204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254505645.mount: Deactivated successfully. 
Apr 13 19:28:31.235969 containerd[1620]: time="2026-04-13T19:28:31.235895794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:31.238332 containerd[1620]: time="2026-04-13T19:28:31.237926128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:28:31.240458 containerd[1620]: time="2026-04-13T19:28:31.240399932Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:31.245351 containerd[1620]: time="2026-04-13T19:28:31.245292215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:31.246727 containerd[1620]: time="2026-04-13T19:28:31.246674467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.292673565s" Apr 13 19:28:31.246727 containerd[1620]: time="2026-04-13T19:28:31.246719310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:28:31.254712 containerd[1620]: time="2026-04-13T19:28:31.254250128Z" level=info msg="CreateContainer within sandbox \"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:28:31.275520 
containerd[1620]: time="2026-04-13T19:28:31.275392766Z" level=info msg="CreateContainer within sandbox \"ce5d2c7ed91741c15ecd2e498d64e30c78f783a9290d9fec85eecdfcdd9e0821\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f20051f16a21b432d9eda7de500034e2ca79702bd8c7adda06594b1e6eae2af0\"" Apr 13 19:28:31.277616 containerd[1620]: time="2026-04-13T19:28:31.277573950Z" level=info msg="StartContainer for \"f20051f16a21b432d9eda7de500034e2ca79702bd8c7adda06594b1e6eae2af0\"" Apr 13 19:28:31.354335 containerd[1620]: time="2026-04-13T19:28:31.354201138Z" level=info msg="StartContainer for \"f20051f16a21b432d9eda7de500034e2ca79702bd8c7adda06594b1e6eae2af0\" returns successfully" Apr 13 19:28:31.619349 kubelet[2698]: I0413 19:28:31.619158 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-676675864c-cv674" podStartSLOduration=2.936977076 podStartE2EDuration="18.61913362s" podCreationTimestamp="2026-04-13 19:28:13 +0000 UTC" firstStartedPulling="2026-04-13 19:28:15.565461385 +0000 UTC m=+37.610788315" lastFinishedPulling="2026-04-13 19:28:31.247617929 +0000 UTC m=+53.292944859" observedRunningTime="2026-04-13 19:28:31.617883537 +0000 UTC m=+53.663210467" watchObservedRunningTime="2026-04-13 19:28:31.61913362 +0000 UTC m=+53.664460590" Apr 13 19:28:38.085336 containerd[1620]: time="2026-04-13T19:28:38.084871645Z" level=info msg="StopPodSandbox for \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\"" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.138 [WARNING][5445] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5241b13b-999d-4851-9b4e-47561517f354", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0", Pod:"calico-apiserver-7885d7c4d8-kjddq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidea0326cbf5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.138 [INFO][5445] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.138 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" iface="eth0" netns="" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.138 [INFO][5445] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.138 [INFO][5445] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.164 [INFO][5454] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.164 [INFO][5454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.164 [INFO][5454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.180 [WARNING][5454] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.180 [INFO][5454] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.182 [INFO][5454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.192915 containerd[1620]: 2026-04-13 19:28:38.187 [INFO][5445] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.192915 containerd[1620]: time="2026-04-13T19:28:38.192720193Z" level=info msg="TearDown network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" successfully" Apr 13 19:28:38.192915 containerd[1620]: time="2026-04-13T19:28:38.192743754Z" level=info msg="StopPodSandbox for \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" returns successfully" Apr 13 19:28:38.193585 containerd[1620]: time="2026-04-13T19:28:38.193343750Z" level=info msg="RemovePodSandbox for \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\"" Apr 13 19:28:38.197618 containerd[1620]: time="2026-04-13T19:28:38.197566645Z" level=info msg="Forcibly stopping sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\"" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.240 [WARNING][5468] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5241b13b-999d-4851-9b4e-47561517f354", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ae703df4a9502b5f305544be44901438bd0076121c1d3c73c39f103ea34851e0", Pod:"calico-apiserver-7885d7c4d8-kjddq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidea0326cbf5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.240 [INFO][5468] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.240 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" iface="eth0" netns="" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.240 [INFO][5468] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.240 [INFO][5468] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.262 [INFO][5476] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.262 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.262 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.275 [WARNING][5476] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.276 [INFO][5476] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" HandleID="k8s-pod-network.7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--kjddq-eth0" Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.278 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.282016 containerd[1620]: 2026-04-13 19:28:38.280 [INFO][5468] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e" Apr 13 19:28:38.283380 containerd[1620]: time="2026-04-13T19:28:38.282044343Z" level=info msg="TearDown network for sandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" successfully" Apr 13 19:28:38.287831 containerd[1620]: time="2026-04-13T19:28:38.287786729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:38.288002 containerd[1620]: time="2026-04-13T19:28:38.287869854Z" level=info msg="RemovePodSandbox \"7ebe2b83a4be97f70757917fbe7fe38dc310050797dd267576b135397b02ab5e\" returns successfully" Apr 13 19:28:38.288476 containerd[1620]: time="2026-04-13T19:28:38.288447009Z" level=info msg="StopPodSandbox for \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\"" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.338 [WARNING][5490] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.338 [INFO][5490] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.338 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" iface="eth0" netns="" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.338 [INFO][5490] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.338 [INFO][5490] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.364 [INFO][5498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.364 [INFO][5498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.364 [INFO][5498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.374 [WARNING][5498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.375 [INFO][5498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.377 [INFO][5498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.381308 containerd[1620]: 2026-04-13 19:28:38.379 [INFO][5490] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.381308 containerd[1620]: time="2026-04-13T19:28:38.381282571Z" level=info msg="TearDown network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" successfully" Apr 13 19:28:38.381814 containerd[1620]: time="2026-04-13T19:28:38.381319653Z" level=info msg="StopPodSandbox for \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" returns successfully" Apr 13 19:28:38.382605 containerd[1620]: time="2026-04-13T19:28:38.382574009Z" level=info msg="RemovePodSandbox for \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\"" Apr 13 19:28:38.382698 containerd[1620]: time="2026-04-13T19:28:38.382638053Z" level=info msg="Forcibly stopping sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\"" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.423 [WARNING][5512] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" WorkloadEndpoint="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.423 [INFO][5512] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.423 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" iface="eth0" netns="" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.423 [INFO][5512] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.423 [INFO][5512] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.465 [INFO][5519] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.465 [INFO][5519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.465 [INFO][5519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.478 [WARNING][5519] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.478 [INFO][5519] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" HandleID="k8s-pod-network.1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Workload="ci--4081--3--7--8--01d4258341-k8s-whisker--74b58fcbcd--6wksg-eth0" Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.481 [INFO][5519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.486254 containerd[1620]: 2026-04-13 19:28:38.483 [INFO][5512] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777" Apr 13 19:28:38.486819 containerd[1620]: time="2026-04-13T19:28:38.486321990Z" level=info msg="TearDown network for sandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" successfully" Apr 13 19:28:38.491868 containerd[1620]: time="2026-04-13T19:28:38.491767358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:38.492046 containerd[1620]: time="2026-04-13T19:28:38.491896566Z" level=info msg="RemovePodSandbox \"1c6f805bbb98b1f7adb756761a43fc0f7ad765c6738e53a622dde51ed3ff6777\" returns successfully" Apr 13 19:28:38.492558 containerd[1620]: time="2026-04-13T19:28:38.492504123Z" level=info msg="StopPodSandbox for \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\"" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.533 [WARNING][5534] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0", GenerateName:"calico-kube-controllers-669bfd5bbc-", Namespace:"calico-system", SelfLink:"", UID:"8dfd1508-5560-4545-9951-14512e76963d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669bfd5bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1", Pod:"calico-kube-controllers-669bfd5bbc-9897g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab859c802f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.533 [INFO][5534] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.533 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" iface="eth0" netns="" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.533 [INFO][5534] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.533 [INFO][5534] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.553 [INFO][5541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.553 [INFO][5541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.554 [INFO][5541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.567 [WARNING][5541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.567 [INFO][5541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.571 [INFO][5541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.579382 containerd[1620]: 2026-04-13 19:28:38.576 [INFO][5534] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.579382 containerd[1620]: time="2026-04-13T19:28:38.578393626Z" level=info msg="TearDown network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" successfully" Apr 13 19:28:38.579382 containerd[1620]: time="2026-04-13T19:28:38.578419147Z" level=info msg="StopPodSandbox for \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" returns successfully" Apr 13 19:28:38.580587 containerd[1620]: time="2026-04-13T19:28:38.580415668Z" level=info msg="RemovePodSandbox for \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\"" Apr 13 19:28:38.580587 containerd[1620]: time="2026-04-13T19:28:38.580462230Z" level=info msg="Forcibly stopping sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\"" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.618 [WARNING][5555] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0", GenerateName:"calico-kube-controllers-669bfd5bbc-", Namespace:"calico-system", SelfLink:"", UID:"8dfd1508-5560-4545-9951-14512e76963d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669bfd5bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"64a94d408824bf7ab8749586db221cf9f7036b2d623c5d5f79bfff2a8891b7f1", Pod:"calico-kube-controllers-669bfd5bbc-9897g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab859c802f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.618 [INFO][5555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.618 [INFO][5555] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" iface="eth0" netns="" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.618 [INFO][5555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.618 [INFO][5555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.645 [INFO][5563] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.645 [INFO][5563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.645 [INFO][5563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.660 [WARNING][5563] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.660 [INFO][5563] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" HandleID="k8s-pod-network.a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--kube--controllers--669bfd5bbc--9897g-eth0" Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.665 [INFO][5563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.670815 containerd[1620]: 2026-04-13 19:28:38.667 [INFO][5555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f" Apr 13 19:28:38.671577 containerd[1620]: time="2026-04-13T19:28:38.670875886Z" level=info msg="TearDown network for sandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" successfully" Apr 13 19:28:38.677087 containerd[1620]: time="2026-04-13T19:28:38.677019257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:38.677267 containerd[1620]: time="2026-04-13T19:28:38.677107022Z" level=info msg="RemovePodSandbox \"a0ee31723bd72bce2343f0718473064c9fd74311836dbc44ae2bc10710eee40f\" returns successfully" Apr 13 19:28:38.677913 containerd[1620]: time="2026-04-13T19:28:38.677885349Z" level=info msg="StopPodSandbox for \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\"" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.721 [WARNING][5577] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb288a13-6f3a-4bf1-9058-ef9f893933ae", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4", Pod:"csi-node-driver-xdt82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd3e778afe9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.722 [INFO][5577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.722 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" iface="eth0" netns="" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.722 [INFO][5577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.722 [INFO][5577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.744 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.744 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.744 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.756 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.756 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.760 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.765442 containerd[1620]: 2026-04-13 19:28:38.762 [INFO][5577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.765442 containerd[1620]: time="2026-04-13T19:28:38.765281823Z" level=info msg="TearDown network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" successfully" Apr 13 19:28:38.765442 containerd[1620]: time="2026-04-13T19:28:38.765315225Z" level=info msg="StopPodSandbox for \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" returns successfully" Apr 13 19:28:38.767147 containerd[1620]: time="2026-04-13T19:28:38.766436533Z" level=info msg="RemovePodSandbox for \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\"" Apr 13 19:28:38.767147 containerd[1620]: time="2026-04-13T19:28:38.766481496Z" level=info msg="Forcibly stopping sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\"" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.808 [WARNING][5598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb288a13-6f3a-4bf1-9058-ef9f893933ae", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"ac9ff70f3ca24659a7f8ff991e226ca45e68357a59c8869235e63fbbb87eece4", Pod:"csi-node-driver-xdt82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd3e778afe9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.808 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.808 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" iface="eth0" netns="" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.808 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.808 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.832 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.832 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.832 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.847 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.847 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" HandleID="k8s-pod-network.92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Workload="ci--4081--3--7--8--01d4258341-k8s-csi--node--driver--xdt82-eth0" Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.849 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.853962 containerd[1620]: 2026-04-13 19:28:38.851 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b" Apr 13 19:28:38.854633 containerd[1620]: time="2026-04-13T19:28:38.854061340Z" level=info msg="TearDown network for sandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" successfully" Apr 13 19:28:38.870723 containerd[1620]: time="2026-04-13T19:28:38.870655302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:38.871293 containerd[1620]: time="2026-04-13T19:28:38.870735507Z" level=info msg="RemovePodSandbox \"92e1b722d97e40a63aea6ba384e730458e3267b6d027fb8f09b706122c5d0c7b\" returns successfully" Apr 13 19:28:38.871422 containerd[1620]: time="2026-04-13T19:28:38.871384826Z" level=info msg="StopPodSandbox for \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\"" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.913 [WARNING][5620] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"781d0594-7e8f-4f56-9e23-737cbe5d8293", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863", Pod:"coredns-674b8bbfcf-ft6d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e423b9195c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.913 [INFO][5620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.913 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" iface="eth0" netns="" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.913 [INFO][5620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.913 [INFO][5620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.942 [INFO][5627] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.943 [INFO][5627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.943 [INFO][5627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.957 [WARNING][5627] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.957 [INFO][5627] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.959 [INFO][5627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:38.964339 containerd[1620]: 2026-04-13 19:28:38.962 [INFO][5620] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:38.964339 containerd[1620]: time="2026-04-13T19:28:38.964295432Z" level=info msg="TearDown network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" successfully" Apr 13 19:28:38.964339 containerd[1620]: time="2026-04-13T19:28:38.964323314Z" level=info msg="StopPodSandbox for \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" returns successfully" Apr 13 19:28:38.967126 containerd[1620]: time="2026-04-13T19:28:38.964862467Z" level=info msg="RemovePodSandbox for \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\"" Apr 13 19:28:38.967126 containerd[1620]: time="2026-04-13T19:28:38.964906029Z" level=info msg="Forcibly stopping sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\"" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.009 [WARNING][5641] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"781d0594-7e8f-4f56-9e23-737cbe5d8293", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"2039eea45940860c3fa191454219cdcab7fc4cb988de8f19feacc116e507d863", Pod:"coredns-674b8bbfcf-ft6d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e423b9195c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 
19:28:39.010 [INFO][5641] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.010 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" iface="eth0" netns="" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.010 [INFO][5641] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.010 [INFO][5641] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.032 [INFO][5649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.033 [INFO][5649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.033 [INFO][5649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.045 [WARNING][5649] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.045 [INFO][5649] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" HandleID="k8s-pod-network.270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--ft6d4-eth0" Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.049 [INFO][5649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.055503 containerd[1620]: 2026-04-13 19:28:39.053 [INFO][5641] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c" Apr 13 19:28:39.056109 containerd[1620]: time="2026-04-13T19:28:39.055569066Z" level=info msg="TearDown network for sandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" successfully" Apr 13 19:28:39.060302 containerd[1620]: time="2026-04-13T19:28:39.060252185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:39.060428 containerd[1620]: time="2026-04-13T19:28:39.060369912Z" level=info msg="RemovePodSandbox \"270a617cd62a55f5d50a4b18cc151bd74e15395c83e4edd901d7cc83f7b91b4c\" returns successfully" Apr 13 19:28:39.061051 containerd[1620]: time="2026-04-13T19:28:39.060970188Z" level=info msg="StopPodSandbox for \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\"" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.104 [WARNING][5663] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e", Pod:"calico-apiserver-7885d7c4d8-s4bvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4b28647f000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.104 [INFO][5663] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.104 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" iface="eth0" netns="" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.104 [INFO][5663] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.104 [INFO][5663] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.125 [INFO][5670] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.125 [INFO][5670] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.126 [INFO][5670] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.137 [WARNING][5670] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.137 [INFO][5670] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.139 [INFO][5670] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.143674 containerd[1620]: 2026-04-13 19:28:39.141 [INFO][5663] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.146462 containerd[1620]: time="2026-04-13T19:28:39.143731770Z" level=info msg="TearDown network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" successfully" Apr 13 19:28:39.146462 containerd[1620]: time="2026-04-13T19:28:39.143766212Z" level=info msg="StopPodSandbox for \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" returns successfully" Apr 13 19:28:39.146462 containerd[1620]: time="2026-04-13T19:28:39.144442812Z" level=info msg="RemovePodSandbox for \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\"" Apr 13 19:28:39.146462 containerd[1620]: time="2026-04-13T19:28:39.144479414Z" level=info msg="Forcibly stopping sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\"" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.195 [WARNING][5684] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0", GenerateName:"calico-apiserver-7885d7c4d8-", Namespace:"calico-system", SelfLink:"", UID:"5e2158f3-f3dc-4de6-bbab-4b66f3e43bd9", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7885d7c4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"cadde1ec1ceb3caf5ad550f8c515eedf7597dcfa16c10f050b7c4f1359fe8c8e", Pod:"calico-apiserver-7885d7c4d8-s4bvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4b28647f000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.196 [INFO][5684] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.196 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" iface="eth0" netns="" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.196 [INFO][5684] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.196 [INFO][5684] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.218 [INFO][5691] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.219 [INFO][5691] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.219 [INFO][5691] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.231 [WARNING][5691] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.231 [INFO][5691] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" HandleID="k8s-pod-network.f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Workload="ci--4081--3--7--8--01d4258341-k8s-calico--apiserver--7885d7c4d8--s4bvw-eth0" Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.234 [INFO][5691] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.238633 containerd[1620]: 2026-04-13 19:28:39.236 [INFO][5684] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7" Apr 13 19:28:39.238633 containerd[1620]: time="2026-04-13T19:28:39.238561112Z" level=info msg="TearDown network for sandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" successfully" Apr 13 19:28:39.243895 containerd[1620]: time="2026-04-13T19:28:39.243849108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:39.243992 containerd[1620]: time="2026-04-13T19:28:39.243964235Z" level=info msg="RemovePodSandbox \"f0ca69d536eeedfb58c75ba117034fcd40a197cb7274707616e93c22b87fdae7\" returns successfully" Apr 13 19:28:39.244829 containerd[1620]: time="2026-04-13T19:28:39.244611033Z" level=info msg="StopPodSandbox for \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\"" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.288 [WARNING][5705] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8159562d-f09d-4214-b84d-626c5db8278b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7", Pod:"goldmane-5b85766d88-7vqgk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali5aa06af23e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.289 [INFO][5705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.289 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" iface="eth0" netns="" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.289 [INFO][5705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.289 [INFO][5705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.314 [INFO][5713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.315 [INFO][5713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.315 [INFO][5713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.327 [WARNING][5713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.328 [INFO][5713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.330 [INFO][5713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.334702 containerd[1620]: 2026-04-13 19:28:39.332 [INFO][5705] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.334702 containerd[1620]: time="2026-04-13T19:28:39.334602367Z" level=info msg="TearDown network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" successfully" Apr 13 19:28:39.334702 containerd[1620]: time="2026-04-13T19:28:39.334627088Z" level=info msg="StopPodSandbox for \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" returns successfully" Apr 13 19:28:39.336154 containerd[1620]: time="2026-04-13T19:28:39.335150919Z" level=info msg="RemovePodSandbox for \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\"" Apr 13 19:28:39.336154 containerd[1620]: time="2026-04-13T19:28:39.335184961Z" level=info msg="Forcibly stopping sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\"" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.375 [WARNING][5727] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"8159562d-f09d-4214-b84d-626c5db8278b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"a51f4062842df027cb8905ea2955c3f595feafbb965e746cc05fddff3a6796f7", Pod:"goldmane-5b85766d88-7vqgk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5aa06af23e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.375 [INFO][5727] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.375 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" iface="eth0" netns="" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.375 [INFO][5727] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.375 [INFO][5727] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.396 [INFO][5734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.396 [INFO][5734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.396 [INFO][5734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.407 [WARNING][5734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.407 [INFO][5734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" HandleID="k8s-pod-network.dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Workload="ci--4081--3--7--8--01d4258341-k8s-goldmane--5b85766d88--7vqgk-eth0" Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.409 [INFO][5734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.415732 containerd[1620]: 2026-04-13 19:28:39.413 [INFO][5727] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1" Apr 13 19:28:39.415732 containerd[1620]: time="2026-04-13T19:28:39.415553840Z" level=info msg="TearDown network for sandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" successfully" Apr 13 19:28:39.421631 containerd[1620]: time="2026-04-13T19:28:39.421358667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:39.421631 containerd[1620]: time="2026-04-13T19:28:39.421461553Z" level=info msg="RemovePodSandbox \"dfa1ba7410df325081917add3ceee636b99e0e631662dae097b86cd8911e83d1\" returns successfully" Apr 13 19:28:39.422159 containerd[1620]: time="2026-04-13T19:28:39.422131753Z" level=info msg="StopPodSandbox for \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\"" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.465 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce", Pod:"coredns-674b8bbfcf-7z59s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef3683f27e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.465 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.465 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" iface="eth0" netns="" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.465 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.465 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.488 [INFO][5755] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.489 [INFO][5755] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.489 [INFO][5755] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.500 [WARNING][5755] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.500 [INFO][5755] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.502 [INFO][5755] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.506663 containerd[1620]: 2026-04-13 19:28:39.504 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.508515 containerd[1620]: time="2026-04-13T19:28:39.507318399Z" level=info msg="TearDown network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" successfully" Apr 13 19:28:39.508515 containerd[1620]: time="2026-04-13T19:28:39.507347681Z" level=info msg="StopPodSandbox for \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" returns successfully" Apr 13 19:28:39.509275 containerd[1620]: time="2026-04-13T19:28:39.508916255Z" level=info msg="RemovePodSandbox for \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\"" Apr 13 19:28:39.509275 containerd[1620]: time="2026-04-13T19:28:39.508973658Z" level=info msg="Forcibly stopping sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\"" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.552 [WARNING][5770] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd51994b-5cb8-4cf8-81b2-1ad58037a2fe", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-8-01d4258341", ContainerID:"f015d5ec4e7308ddaca9bc5a9cfd277d1bf6294e8d9e52bda3c4feccd27219ce", Pod:"coredns-674b8bbfcf-7z59s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef3683f27e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 
19:28:39.552 [INFO][5770] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.553 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" iface="eth0" netns="" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.553 [INFO][5770] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.553 [INFO][5770] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.577 [INFO][5778] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.577 [INFO][5778] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.577 [INFO][5778] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.592 [WARNING][5778] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.592 [INFO][5778] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" HandleID="k8s-pod-network.eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Workload="ci--4081--3--7--8--01d4258341-k8s-coredns--674b8bbfcf--7z59s-eth0" Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.601 [INFO][5778] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:39.609907 containerd[1620]: 2026-04-13 19:28:39.607 [INFO][5770] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763" Apr 13 19:28:39.610911 containerd[1620]: time="2026-04-13T19:28:39.610460638Z" level=info msg="TearDown network for sandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" successfully" Apr 13 19:28:39.614238 containerd[1620]: time="2026-04-13T19:28:39.614201981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:39.614427 containerd[1620]: time="2026-04-13T19:28:39.614407194Z" level=info msg="RemovePodSandbox \"eeadffa11b4c751f5e76c23a1a02279932fff7ab4b1d30bae52ca921272ab763\" returns successfully" Apr 13 19:28:47.825927 kubelet[2698]: I0413 19:28:47.825583 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:51.590876 systemd[1]: run-containerd-runc-k8s.io-680aabcd56ed783064a4f781ce7a232a3a2e240942007d0727fe540ac8145f1b-runc.I0JXZw.mount: Deactivated successfully. Apr 13 19:29:02.221670 kubelet[2698]: I0413 19:29:02.221162 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:29:14.456907 systemd[1]: run-containerd-runc-k8s.io-da1bbe2d1c92643c92a5b11db6ffdfe21df82370889164033c5f873ed3dc2c6e-runc.nyrEE0.mount: Deactivated successfully. Apr 13 19:29:21.590991 systemd[1]: run-containerd-runc-k8s.io-680aabcd56ed783064a4f781ce7a232a3a2e240942007d0727fe540ac8145f1b-runc.ErbAjO.mount: Deactivated successfully. Apr 13 19:29:28.962002 update_engine[1588]: I20260413 19:29:28.961884 1588 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 19:29:28.962659 update_engine[1588]: I20260413 19:29:28.962011 1588 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 19:29:28.962659 update_engine[1588]: I20260413 19:29:28.962371 1588 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 19:29:28.963896 update_engine[1588]: I20260413 19:29:28.963493 1588 omaha_request_params.cc:62] Current group set to lts Apr 13 19:29:28.963896 update_engine[1588]: I20260413 19:29:28.963651 1588 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 19:29:28.963896 update_engine[1588]: I20260413 19:29:28.963690 1588 update_attempter.cc:643] Scheduling an action processor start. 
Apr 13 19:29:28.963896 update_engine[1588]: I20260413 19:29:28.963719 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:29:28.970556 update_engine[1588]: I20260413 19:29:28.965087 1588 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 19:29:28.970556 update_engine[1588]: I20260413 19:29:28.965219 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:29:28.970556 update_engine[1588]: I20260413 19:29:28.965235 1588 omaha_request_action.cc:272] Request: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: Apr 13 19:29:28.970556 update_engine[1588]: I20260413 19:29:28.965246 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:29:28.974029 update_engine[1588]: I20260413 19:29:28.972143 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:29:28.974029 update_engine[1588]: I20260413 19:29:28.972514 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:29:28.974318 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 19:29:28.974564 update_engine[1588]: E20260413 19:29:28.974412 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:29:28.974564 update_engine[1588]: I20260413 19:29:28.974476 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 19:29:38.875838 update_engine[1588]: I20260413 19:29:38.875700 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:29:38.876525 update_engine[1588]: I20260413 19:29:38.876144 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:29:38.876525 update_engine[1588]: I20260413 19:29:38.876415 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:29:38.877343 update_engine[1588]: E20260413 19:29:38.877280 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:29:38.877431 update_engine[1588]: I20260413 19:29:38.877367 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 19:29:48.875971 update_engine[1588]: I20260413 19:29:48.875835 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:29:48.876821 update_engine[1588]: I20260413 19:29:48.876305 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:29:48.876821 update_engine[1588]: I20260413 19:29:48.876605 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:29:48.877794 update_engine[1588]: E20260413 19:29:48.877681 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:29:48.877867 update_engine[1588]: I20260413 19:29:48.877833 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 19:29:58.874647 update_engine[1588]: I20260413 19:29:58.874524 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:29:58.875301 update_engine[1588]: I20260413 19:29:58.874925 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:29:58.875380 update_engine[1588]: I20260413 19:29:58.875295 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:29:58.876810 update_engine[1588]: E20260413 19:29:58.876346 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876426 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876447 1588 omaha_request_action.cc:617] Omaha request response: Apr 13 19:29:58.876810 update_engine[1588]: E20260413 19:29:58.876555 1588 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876581 1588 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876591 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876605 1588 update_attempter.cc:306] Processing Done. Apr 13 19:29:58.876810 update_engine[1588]: E20260413 19:29:58.876623 1588 update_attempter.cc:619] Update failed. 
Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876632 1588 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876641 1588 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 19:29:58.876810 update_engine[1588]: I20260413 19:29:58.876648 1588 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 13 19:29:58.878080 update_engine[1588]: I20260413 19:29:58.877365 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:29:58.878080 update_engine[1588]: I20260413 19:29:58.877434 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:29:58.878080 update_engine[1588]: I20260413 19:29:58.877451 1588 omaha_request_action.cc:272] Request: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: Apr 13 19:29:58.878080 update_engine[1588]: I20260413 19:29:58.877462 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:29:58.878080 update_engine[1588]: I20260413 19:29:58.877677 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:29:58.879209 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 19:29:58.879680 update_engine[1588]: I20260413 19:29:58.878816 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:29:58.880486 update_engine[1588]: E20260413 19:29:58.880182 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880282 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880298 1588 omaha_request_action.cc:617] Omaha request response: Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880308 1588 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880316 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880324 1588 update_attempter.cc:306] Processing Done. Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880334 1588 update_attempter.cc:310] Error event sent. Apr 13 19:29:58.880486 update_engine[1588]: I20260413 19:29:58.880347 1588 update_check_scheduler.cc:74] Next update check in 42m11s Apr 13 19:29:58.880732 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 19:30:01.913213 systemd[1]: Started sshd@7-49.13.63.18:22-50.85.169.122:60916.service - OpenSSH per-connection server daemon (50.85.169.122:60916). Apr 13 19:30:02.039968 sshd[6102]: Accepted publickey for core from 50.85.169.122 port 60916 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:02.043751 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:02.049447 systemd-logind[1586]: New session 8 of user core. Apr 13 19:30:02.058279 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 13 19:30:02.325962 sshd[6102]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:02.331167 systemd[1]: sshd@7-49.13.63.18:22-50.85.169.122:60916.service: Deactivated successfully. Apr 13 19:30:02.338282 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:30:02.339581 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:30:02.340841 systemd-logind[1586]: Removed session 8. Apr 13 19:30:07.347736 systemd[1]: Started sshd@8-49.13.63.18:22-50.85.169.122:60920.service - OpenSSH per-connection server daemon (50.85.169.122:60920). Apr 13 19:30:07.462979 sshd[6117]: Accepted publickey for core from 50.85.169.122 port 60920 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:07.464827 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:07.470788 systemd-logind[1586]: New session 9 of user core. Apr 13 19:30:07.472289 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:30:07.682319 sshd[6117]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:07.688322 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:30:07.688877 systemd[1]: sshd@8-49.13.63.18:22-50.85.169.122:60920.service: Deactivated successfully. Apr 13 19:30:07.692948 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:30:07.694712 systemd-logind[1586]: Removed session 9. Apr 13 19:30:12.710407 systemd[1]: Started sshd@9-49.13.63.18:22-50.85.169.122:53346.service - OpenSSH per-connection server daemon (50.85.169.122:53346). Apr 13 19:30:12.830712 sshd[6131]: Accepted publickey for core from 50.85.169.122 port 53346 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:12.832619 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:12.838543 systemd-logind[1586]: New session 10 of user core. 
Apr 13 19:30:12.841324 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:30:13.043125 sshd[6131]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:13.050721 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:30:13.051380 systemd[1]: sshd@9-49.13.63.18:22-50.85.169.122:53346.service: Deactivated successfully. Apr 13 19:30:13.055263 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:30:13.057835 systemd-logind[1586]: Removed session 10. Apr 13 19:30:18.068495 systemd[1]: Started sshd@10-49.13.63.18:22-50.85.169.122:53348.service - OpenSSH per-connection server daemon (50.85.169.122:53348). Apr 13 19:30:18.194086 sshd[6169]: Accepted publickey for core from 50.85.169.122 port 53348 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:18.196685 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:18.203868 systemd-logind[1586]: New session 11 of user core. Apr 13 19:30:18.211399 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:30:18.418296 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:18.423485 systemd[1]: sshd@10-49.13.63.18:22-50.85.169.122:53348.service: Deactivated successfully. Apr 13 19:30:18.428952 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:30:18.430118 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:30:18.431132 systemd-logind[1586]: Removed session 11. Apr 13 19:30:23.451599 systemd[1]: Started sshd@11-49.13.63.18:22-50.85.169.122:49086.service - OpenSSH per-connection server daemon (50.85.169.122:49086). 
Apr 13 19:30:23.565189 sshd[6225]: Accepted publickey for core from 50.85.169.122 port 49086 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:23.568262 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:23.577905 systemd-logind[1586]: New session 12 of user core. Apr 13 19:30:23.582332 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:30:23.791896 sshd[6225]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:23.798354 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:30:23.798798 systemd[1]: sshd@11-49.13.63.18:22-50.85.169.122:49086.service: Deactivated successfully. Apr 13 19:30:23.804085 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:30:23.815359 systemd[1]: Started sshd@12-49.13.63.18:22-50.85.169.122:49098.service - OpenSSH per-connection server daemon (50.85.169.122:49098). Apr 13 19:30:23.817580 systemd-logind[1586]: Removed session 12. Apr 13 19:30:23.933115 sshd[6239]: Accepted publickey for core from 50.85.169.122 port 49098 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:23.937698 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:23.942407 systemd-logind[1586]: New session 13 of user core. Apr 13 19:30:23.950372 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:30:24.214499 sshd[6239]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:24.224217 systemd[1]: sshd@12-49.13.63.18:22-50.85.169.122:49098.service: Deactivated successfully. Apr 13 19:30:24.232036 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:30:24.235259 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:30:24.246262 systemd[1]: Started sshd@13-49.13.63.18:22-50.85.169.122:49100.service - OpenSSH per-connection server daemon (50.85.169.122:49100). 
Apr 13 19:30:24.248385 systemd-logind[1586]: Removed session 13. Apr 13 19:30:24.369102 sshd[6251]: Accepted publickey for core from 50.85.169.122 port 49100 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:24.372985 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:24.383641 systemd-logind[1586]: New session 14 of user core. Apr 13 19:30:24.391449 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:30:24.599329 sshd[6251]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:24.607004 systemd[1]: sshd@13-49.13.63.18:22-50.85.169.122:49100.service: Deactivated successfully. Apr 13 19:30:24.611107 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:30:24.612110 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:30:24.613637 systemd-logind[1586]: Removed session 14. Apr 13 19:30:29.624236 systemd[1]: Started sshd@14-49.13.63.18:22-50.85.169.122:52304.service - OpenSSH per-connection server daemon (50.85.169.122:52304). Apr 13 19:30:29.745911 sshd[6324]: Accepted publickey for core from 50.85.169.122 port 52304 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:29.747680 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:29.753216 systemd-logind[1586]: New session 15 of user core. Apr 13 19:30:29.757241 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:30:29.950076 sshd[6324]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:29.955604 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:30:29.956508 systemd[1]: sshd@14-49.13.63.18:22-50.85.169.122:52304.service: Deactivated successfully. Apr 13 19:30:29.963232 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:30:29.964754 systemd-logind[1586]: Removed session 15. 
Apr 13 19:30:34.976371 systemd[1]: Started sshd@15-49.13.63.18:22-50.85.169.122:52320.service - OpenSSH per-connection server daemon (50.85.169.122:52320). Apr 13 19:30:35.102438 sshd[6338]: Accepted publickey for core from 50.85.169.122 port 52320 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:35.103333 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:35.108869 systemd-logind[1586]: New session 16 of user core. Apr 13 19:30:35.114200 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:30:35.316277 sshd[6338]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:35.322369 systemd[1]: sshd@15-49.13.63.18:22-50.85.169.122:52320.service: Deactivated successfully. Apr 13 19:30:35.325964 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:30:35.326500 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:30:35.328088 systemd-logind[1586]: Removed session 16. Apr 13 19:30:35.341877 systemd[1]: Started sshd@16-49.13.63.18:22-50.85.169.122:52328.service - OpenSSH per-connection server daemon (50.85.169.122:52328). Apr 13 19:30:35.469223 sshd[6352]: Accepted publickey for core from 50.85.169.122 port 52328 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:35.470873 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:35.478073 systemd-logind[1586]: New session 17 of user core. Apr 13 19:30:35.483592 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:30:35.862274 sshd[6352]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:35.869547 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:30:35.873210 systemd[1]: sshd@16-49.13.63.18:22-50.85.169.122:52328.service: Deactivated successfully. Apr 13 19:30:35.883271 systemd[1]: session-17.scope: Deactivated successfully. 
Apr 13 19:30:35.899281 systemd[1]: Started sshd@17-49.13.63.18:22-50.85.169.122:52336.service - OpenSSH per-connection server daemon (50.85.169.122:52336). Apr 13 19:30:35.900082 systemd-logind[1586]: Removed session 17. Apr 13 19:30:36.048494 sshd[6364]: Accepted publickey for core from 50.85.169.122 port 52336 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:36.050572 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:36.061481 systemd-logind[1586]: New session 18 of user core. Apr 13 19:30:36.066286 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:30:36.974597 sshd[6364]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:36.984465 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:30:36.988326 systemd[1]: sshd@17-49.13.63.18:22-50.85.169.122:52336.service: Deactivated successfully. Apr 13 19:30:37.001375 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:30:37.011235 systemd[1]: Started sshd@18-49.13.63.18:22-50.85.169.122:52350.service - OpenSSH per-connection server daemon (50.85.169.122:52350). Apr 13 19:30:37.015161 systemd-logind[1586]: Removed session 18. Apr 13 19:30:37.146146 sshd[6394]: Accepted publickey for core from 50.85.169.122 port 52350 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:37.148911 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:37.155200 systemd-logind[1586]: New session 19 of user core. Apr 13 19:30:37.166023 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:30:37.578642 sshd[6394]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:37.586741 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:30:37.589580 systemd[1]: sshd@18-49.13.63.18:22-50.85.169.122:52350.service: Deactivated successfully. 
Apr 13 19:30:37.593900 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:30:37.597925 systemd-logind[1586]: Removed session 19. Apr 13 19:30:37.605985 systemd[1]: Started sshd@19-49.13.63.18:22-50.85.169.122:52366.service - OpenSSH per-connection server daemon (50.85.169.122:52366). Apr 13 19:30:37.724984 sshd[6405]: Accepted publickey for core from 50.85.169.122 port 52366 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:37.726825 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:37.733572 systemd-logind[1586]: New session 20 of user core. Apr 13 19:30:37.737293 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:30:37.929702 sshd[6405]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:37.933089 systemd[1]: sshd@19-49.13.63.18:22-50.85.169.122:52366.service: Deactivated successfully. Apr 13 19:30:37.933615 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:30:37.939256 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:30:37.941713 systemd-logind[1586]: Removed session 20. Apr 13 19:30:42.957340 systemd[1]: Started sshd@20-49.13.63.18:22-50.85.169.122:58212.service - OpenSSH per-connection server daemon (50.85.169.122:58212). Apr 13 19:30:43.075993 sshd[6423]: Accepted publickey for core from 50.85.169.122 port 58212 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:43.078330 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:43.083924 systemd-logind[1586]: New session 21 of user core. Apr 13 19:30:43.092601 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:30:43.278829 sshd[6423]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:43.285017 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. 
Apr 13 19:30:43.286298 systemd[1]: sshd@20-49.13.63.18:22-50.85.169.122:58212.service: Deactivated successfully. Apr 13 19:30:43.292137 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:30:43.293146 systemd-logind[1586]: Removed session 21. Apr 13 19:30:48.309841 systemd[1]: Started sshd@21-49.13.63.18:22-50.85.169.122:58226.service - OpenSSH per-connection server daemon (50.85.169.122:58226). Apr 13 19:30:48.431055 sshd[6461]: Accepted publickey for core from 50.85.169.122 port 58226 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:30:48.432805 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:48.440407 systemd-logind[1586]: New session 22 of user core. Apr 13 19:30:48.443697 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:30:48.647171 sshd[6461]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:48.655094 systemd[1]: sshd@21-49.13.63.18:22-50.85.169.122:58226.service: Deactivated successfully. Apr 13 19:30:48.659876 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:30:48.662208 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:30:48.663558 systemd-logind[1586]: Removed session 22. 
Apr 13 19:31:03.649453 containerd[1620]: time="2026-04-13T19:31:03.649363983Z" level=info msg="shim disconnected" id=377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270 namespace=k8s.io Apr 13 19:31:03.649453 containerd[1620]: time="2026-04-13T19:31:03.649433741Z" level=warning msg="cleaning up after shim disconnected" id=377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270 namespace=k8s.io Apr 13 19:31:03.649453 containerd[1620]: time="2026-04-13T19:31:03.649444061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:31:03.651895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270-rootfs.mount: Deactivated successfully. Apr 13 19:31:03.944358 containerd[1620]: time="2026-04-13T19:31:03.943312193Z" level=info msg="shim disconnected" id=d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea namespace=k8s.io Apr 13 19:31:03.944358 containerd[1620]: time="2026-04-13T19:31:03.943512548Z" level=warning msg="cleaning up after shim disconnected" id=d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea namespace=k8s.io Apr 13 19:31:03.944358 containerd[1620]: time="2026-04-13T19:31:03.943530027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:31:03.948658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea-rootfs.mount: Deactivated successfully. 
Apr 13 19:31:03.957293 kubelet[2698]: E0413 19:31:03.956825 2698 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56600->10.0.0.2:2379: read: connection timed out" Apr 13 19:31:03.968679 containerd[1620]: time="2026-04-13T19:31:03.968290281Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:31:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:31:03.995964 containerd[1620]: time="2026-04-13T19:31:03.992139039Z" level=info msg="shim disconnected" id=02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd namespace=k8s.io Apr 13 19:31:03.995964 containerd[1620]: time="2026-04-13T19:31:03.992201677Z" level=warning msg="cleaning up after shim disconnected" id=02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd namespace=k8s.io Apr 13 19:31:03.995964 containerd[1620]: time="2026-04-13T19:31:03.992210997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:31:03.997036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd-rootfs.mount: Deactivated successfully. 
Apr 13 19:31:04.065418 kubelet[2698]: I0413 19:31:04.065082 2698 scope.go:117] "RemoveContainer" containerID="02db44b84dbcaba3e74d53542f3415ba2294aa660d8bb79a66e1c7a5c4820dfd" Apr 13 19:31:04.065418 kubelet[2698]: I0413 19:31:04.065157 2698 scope.go:117] "RemoveContainer" containerID="377e7831635617bd92f8a0062e35a82d3bddb7d95533bfae8e7241c6b6ad3270" Apr 13 19:31:04.065930 kubelet[2698]: I0413 19:31:04.065811 2698 scope.go:117] "RemoveContainer" containerID="d1819cdfa16062a70b56e7c1cf1661b04acfce9783619495e9f043b1b18cbbea" Apr 13 19:31:04.070416 containerd[1620]: time="2026-04-13T19:31:04.070067924Z" level=info msg="CreateContainer within sandbox \"aa87fc31d88b1a43b8aa84cb121a5a70d20257497ef54d6c836437ef8441203a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 19:31:04.070416 containerd[1620]: time="2026-04-13T19:31:04.070317278Z" level=info msg="CreateContainer within sandbox \"b67892528463124b0c131108094802685d27ebf2d808d2e0925916001ade6e72\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 13 19:31:04.072683 containerd[1620]: time="2026-04-13T19:31:04.071932036Z" level=info msg="CreateContainer within sandbox \"57fda3b9c98006334080e1bc22ddd4f9066d08bbb1fa434d7c4c2e9fde21ca54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 13 19:31:04.095826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036703686.mount: Deactivated successfully. 
Apr 13 19:31:04.097760 containerd[1620]: time="2026-04-13T19:31:04.097000304Z" level=info msg="CreateContainer within sandbox \"b67892528463124b0c131108094802685d27ebf2d808d2e0925916001ade6e72\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9ea53fe42281f663d4b132d61d8cf73752df59a609dda66d5c5cf9e6f6c13cac\"" Apr 13 19:31:04.099199 containerd[1620]: time="2026-04-13T19:31:04.099017452Z" level=info msg="StartContainer for \"9ea53fe42281f663d4b132d61d8cf73752df59a609dda66d5c5cf9e6f6c13cac\"" Apr 13 19:31:04.099851 containerd[1620]: time="2026-04-13T19:31:04.099715833Z" level=info msg="CreateContainer within sandbox \"aa87fc31d88b1a43b8aa84cb121a5a70d20257497ef54d6c836437ef8441203a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8b5a82581be41096a03aea3cbe891babb01fc4b888ccc720d38527fdb4728365\"" Apr 13 19:31:04.101215 containerd[1620]: time="2026-04-13T19:31:04.101181435Z" level=info msg="StartContainer for \"8b5a82581be41096a03aea3cbe891babb01fc4b888ccc720d38527fdb4728365\"" Apr 13 19:31:04.111488 containerd[1620]: time="2026-04-13T19:31:04.111419609Z" level=info msg="CreateContainer within sandbox \"57fda3b9c98006334080e1bc22ddd4f9066d08bbb1fa434d7c4c2e9fde21ca54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8ec13345372714431621cc057baee428e79cc6859fa666d65685b4859fbc6e1d\"" Apr 13 19:31:04.112515 containerd[1620]: time="2026-04-13T19:31:04.112488821Z" level=info msg="StartContainer for \"8ec13345372714431621cc057baee428e79cc6859fa666d65685b4859fbc6e1d\"" Apr 13 19:31:04.224822 containerd[1620]: time="2026-04-13T19:31:04.224274474Z" level=info msg="StartContainer for \"8b5a82581be41096a03aea3cbe891babb01fc4b888ccc720d38527fdb4728365\" returns successfully" Apr 13 19:31:04.226613 containerd[1620]: time="2026-04-13T19:31:04.225796354Z" level=info msg="StartContainer for \"8ec13345372714431621cc057baee428e79cc6859fa666d65685b4859fbc6e1d\" returns successfully"
Apr 13 19:31:04.233348 containerd[1620]: time="2026-04-13T19:31:04.233201722Z" level=info msg="StartContainer for \"9ea53fe42281f663d4b132d61d8cf73752df59a609dda66d5c5cf9e6f6c13cac\" returns successfully"